SRX

Ask questions and share experiences about the SRX Series, vSRX, and cSRX.
  • 1.  SRX SPC and NPC Modules and Throughput of SRX Firewall

    Posted 06-25-2013 00:40

    Hi, 

     

I have an SRX3400 cluster and a question: how can I increase throughput by using two or more SPC or NPC modules? How is throughput related to the SPC and NPC modules? Which module helps increase throughput, and by how much does a single module increase it?

     

If these modules are not related to throughput, why do we have the option to install multiple SPC or NPC modules?

     

    Regards,

    Atif.



  • 2.  RE: SRX SPC and NPC Modules and Throughput of SRX Firewall

    Posted 06-27-2013 17:08

Your SPCs give you more bandwidth, and your NPCs give you more sessions.

     

    For the SRX3000 series see this spec sheet.

     

    http://www.juniper.net/elqNow/elqRedir.htm?ref=http://www.juniper.net/us/en/local/pdf/datasheets/1000267-en.pdf

     

Your 3400 supports up to 3 SPCs, each giving about 10 Gbps of performance, for a max of about 30 Gbps.

     

The 3400 supports up to 2 NPCs; each supports about 125k sessions, for a maximum of 250k.

     

This topic on understanding flow in the data center SRX line may also be helpful.

     

    http://www.juniper.net/techpubs/en_US/junos11.4/topics/concept/session-based-processing-for-srx3000-line-overview.html
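    Just to make the arithmetic behind those numbers explicit, here is a rough sketch in Python. The per-card figures (10 Gbps per SPC, 125k sessions per NPC) are the approximate ones quoted above, not measured values:

    ```python
    # Rough capacity model for an SRX3400 based on the per-card
    # figures quoted above. Datasheet-style approximations only.

    SPC_GBPS = 10           # approx. firewall throughput added per SPC
    NPC_SESSIONS = 125_000  # approx. sessions supported per NPC

    def srx3400_capacity(spcs: int, npcs: int) -> tuple[int, int]:
        """Return (approx. throughput in Gbps, approx. session capacity)."""
        if not 0 < spcs <= 3:
            raise ValueError("SRX3400 takes at most 3 SPCs")
        if not 0 < npcs <= 2:
            raise ValueError("SRX3400 takes at most 2 NPCs")
        return spcs * SPC_GBPS, npcs * NPC_SESSIONS

    # Fully populated chassis: about 30 Gbps and 250k sessions.
    print(srx3400_capacity(3, 2))  # (30, 250000)
    ```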



  • 3.  RE: SRX SPC and NPC Modules and Throughput of SRX Firewall

    Posted 07-13-2015 19:39

    Steve, I have an SRX3600, but its maximum throughput is 55 Gbps. In your post you say that each SPC gives 10 Gbps, and the SRX3600 supports up to 7 cards.

    My question is: how much performance does each SPC and NPC add on an SRX3600?

     

    Regards



  • 4.  RE: SRX SPC and NPC Modules and Throughput of SRX Firewall

     
    Posted 07-13-2015 20:16

    Hello,

     

    Just to add some points here: the SRX3600 supports a maximum of 7 SPCs and 3 NPCs. Overall throughput is determined mainly by the NPC cards, since traffic flows across:

     

    IOC --> Fabric --> NPC --> Fabric --> SPC

     

    This is how data flows in the SRX3600 architecture. The SPCs mainly handle session details and security processing. So even though you can install up to 7 SPCs, you can insert at most 3 NPC cards in an SRX3600 chassis, since they are supported only in the rear slots, and each NPC can handle only 10 Gbps. That caps the maximum firewall performance at roughly 30 Gbps. So even with the option to add more line cards, firewall performance will max out at about 30 Gbps or less.

     

     

    Reference : http://www.juniper.net/us/en/local/pdf/datasheets/1000267-en.pdf
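    The data path above can be sketched as a simple bottleneck model: end-to-end throughput is capped by the slowest stage on the IOC --> NPC --> SPC path. The per-card figures are the approximations from this thread, not official numbers:

    ```python
    # Path bottleneck sketch for the SRX3600: throughput is capped by
    # whichever aggregate is smaller, the NPCs or the SPCs. Per-card
    # numbers are the approximate figures quoted in this thread.

    NPC_GBPS = 10  # approx. traffic each NPC can handle

    def srx3600_max_throughput(spcs: int, npcs: int, spc_gbps: float = 10) -> float:
        """Approximate firewall throughput: the slowest stage wins."""
        spcs = min(spcs, 7)  # SRX3600 chassis limit on SPCs
        npcs = min(npcs, 3)  # NPCs fit only in the rear slots
        return min(spcs * spc_gbps, npcs * NPC_GBPS)

    # Even fully loaded with 7 SPCs, 3 NPCs cap the box at ~30 Gbps:
    print(srx3600_max_throughput(7, 3))  # 30
    ```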



  • 5.  RE: SRX SPC and NPC Modules and Throughput of SRX Firewall

    Posted 07-14-2015 09:15

    You need to keep in mind there are also NP-IOCs (IOC cards that have NPs built in):

     


    Running Express Path on these NP-IOCs will give you 10 Gbps of IMIX firewall throughput per card. As long as you don't trigger advanced-services lookups (IPS, APP-ID, etc.), the flows will not be passed to the SPCs for processing.

     

    They can fit in:

    • SRX3600: Front slots labeled 1-6 and rear slots labeled 7-12.
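    As a toy model of that fast-path decision (the service names and the function here are purely illustrative, not a Junos API):

    ```python
    # Toy model of Express Path on an NP-IOC: flows that trigger no
    # advanced-services lookups stay on the NP and never reach an SPC.
    # The service set below is illustrative, not an exhaustive list.

    ADVANCED_SERVICES = {"IPS", "APP-ID", "ALG", "UTM"}

    def handled_on_np(flow_services: set) -> bool:
        """True if the flow can stay on the NP-IOC fast path."""
        return not (flow_services & ADVANCED_SERVICES)

    print(handled_on_np(set()))    # True  -> stays on the NP-IOC
    print(handled_on_np({"IPS"}))  # False -> punted to an SPC
    ```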


  • 6.  RE: SRX SPC and NPC Modules and Throughput of SRX Firewall

    Posted 06-29-2013 00:44
    To understand NPCs, you need to consider IOCs. I'll try to explain:

    Each IOC binds to exactly one NPC
    Multiple IOCs can be bound to one NPC
    Multiple NPCs cannot be bound to one IOC

    The built-in ports count as one IOC

    Some numbers:

    Each IOC has a 10 Gb full duplex connection to the fabric
    Each SPC has a 10Gb full duplex connection to the fabric
    Each NPC has two 10 Gb full duplex connections to the fabric: one towards the IOCs, and one towards the SPCs

    Basic traffic flow:

    Traffic enters an IOC, gets through the fabric to the associated ingress NPC
    The ingress NPC load-balances to an SPC (*), traffic gets to the SPC via the fabric
    SPC does processing, sends traffic back out to NPC of egress IOC via fabric
    Egress NPC sends traffic to egress IOC via the fabric, traffic leaves the IOC


    Making yourself a diagram of that flow will likely be really useful.

    What does that mean in practice? Let's consider the 2x10G IOC.

    It means that the 2x10G IOC is 2:1 oversubscribed: it can only handle 10G full duplex, because of the constraints of its fabric connection.

    Say you have one 2x10G IOC. You are using both ports, and for the sake of argument, you have SPCs sufficient to handle 5G of traffic. You want to get to 10G. In that case, just add SPCs. Adding an additional IOC and NPC isn't going to help; your bottleneck is the SPCs.

    Case 2: You now want to get beyond 10G throughput. You'd add more SPCs (maximum 7 on an SRX3600, by the way), then add an additional IOC and NPC. You now have two 2x10G IOCs, but use only one port on each. You have two NPCs, which means each IOC can handle its full 10 Gb and, SPC throughput willing, you'll get 20 Gb of traffic through the unit: 10G in each direction (IOC A to IOC B, and vice versa).

    Case 2 and a half: you have 7 SPCs and two 2x10G IOCs with one port used on each, but only one NPC. You'd be stuck at 10 Gb throughput: the fabric connections of the NPC become the bottleneck.
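    The three cases above all fall out of one rule: throughput is the minimum of SPC capacity, aggregate NPC fabric capacity, and aggregate IOC fabric capacity. A small sketch, using the approximate 10G-per-fabric-link figures from this post:

    ```python
    # Bottleneck model for the cases above: throughput is the minimum
    # of SPC capacity, aggregate NPC fabric capacity, and aggregate
    # IOC fabric capacity. 10G-per-link figures are from this post.

    def throughput_gbps(spc_capacity: float, npcs: int, iocs_in_use: int) -> float:
        npc_fabric = npcs * 10         # each NPC: 10G full duplex per side
        ioc_fabric = iocs_in_use * 10  # each IOC: one 10G fabric link
        return min(spc_capacity, npc_fabric, ioc_fabric)

    # Case 1: SPCs good for 5G, one IOC, one NPC -> SPCs are the bottleneck.
    print(throughput_gbps(5, npcs=1, iocs_in_use=1))   # 5

    # Case 2: ample SPCs, two IOCs, two NPCs -> 20G through the unit.
    print(throughput_gbps(30, npcs=2, iocs_in_use=2))  # 20

    # Case 2.5: two IOCs but only one NPC -> NPC fabric caps it at 10G.
    print(throughput_gbps(30, npcs=1, iocs_in_use=2))  # 10
    ```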

    Okay, you have an SRX3400, which means your real-world SPC performance (IMIX, or HTTP) will top out at around 10 Gb with three SPCs. In that case, adding an extra IOC and NPC isn't going to get you past that 10 Gb limit. The good news is that all of these modules will work in an SRX3600, so if you need to get beyond 10 Gb, you can move to a new chassis and keep your investment in IOCs, NPCs and SPCs.

    To get beyond 20 Gb of real-world throughput, you'd move to the SRX5000 line, space allowing an SRX5800. It's a different architecture, where IOCs come with built-in NPs, so this oversubscription issue does not exist. Next-gen SPCs mean you have a lot of performance headroom on that platform. All of which comes at a considerably higher price point than the SRX3000 series.

    Out of curiosity, what throughput are you aiming at on the SRX 3400?


    (*) simplified - there are flow lookups and first SPC considerations here, but that's largely irrelevant for this discussion