
  • 1.  CGNAT sizing

    Posted 29 days ago

    Hi, I need to provide a solution for a CGNAT service in an ISP network. I have to provide a method to choose the correct SRX (or an MX with the right number of MX-SPC3 cards).
    What are the correct numbers to ask for? The number of clients (fixed and mobile)? The number of sessions per client?
    Do you have experience with those numbers? How many sessions per client do fixed and mobile clients typically have?

    Thanks in advance



    Massimiliano Galizia

  • 2.  RE: CGNAT sizing
    Best Answer

    Posted 27 days ago

    CGNAT on MX is what I implemented for an ISP of 50,000 subscribers.  Mainly residential broadband and some businesses as well.  Mostly FTTH, with some cable modem and DSL.  We tested and implemented in ~2018.  We first did MX104s with MS-MIC-16G for ~4,000 DSL customers.  A while later (6 months or so) we did the ~4,000 cable modem DOCSIS customers on (2) MX960s with MS-MPC-128G.  ~6 months later we did the ~42,000 FTTH customers on (4) MX960s with MS-MPC-128G.  Since then we've added another ~5,000 FTTH customers and scaled out to an additional (2) MX960s with that CGNAT module.  Along the way we learned a lot....

    We saw in early testing that opening a single web page could result in ~250 ports being used.  Many are short-lived; far fewer are long-lived sessions.  We went with the default timers for recycling ports and terminating sessions... I forget what they are, but I think they differ between ICMP, TCP, and UDP.  I have found through years of use now that the MS-MPC-128G tops out around ~67 Gbps of throughput.  From a sizing and scale perspective, we did the following...

    - hand out port blocks of 100 ports each
    - allow each customer up to 30 of those port blocks
    - so effectively each customer is limited to 3,000 simultaneous ports
    - we actually started smaller than this, monitored it, and increased the limits as we saw allocation errors
    - I recall using a single publicly routable /24 for each MS-MPC-128G.  We got significant savings of our IPv4 address space with CGNAT.  The CGNAT module automatically chops up the /24 into (4) /26s and assigns them to the underlying mams interfaces (the MS-MPC PICs)
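    For anyone wanting to see what that scheme looks like in config: on the MS-MPC platform the port-block behavior maps to the secured port-block allocation knobs under the services NAT pool.  A minimal sketch — the pool name, example /24, and timeout value are illustrative, not taken from my actual config:

    ```
    services {
        nat {
            pool cgnat-pool {
                address 203.0.113.0/24;    /* one public /24 per MS-MPC-128G */
                port {
                    /* 100-port blocks, max 30 blocks per subscriber = 3,000 ports */
                    secured-port-block-allocation block-size 100 max-blocks-per-host 30 active-block-timeout 300;
                }
            }
        }
    }
    ```

    With block-size 100 and max-blocks-per-host 30, the arithmetic matches the limits above: each subscriber tops out at 3,000 concurrent ports.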

    We did have some issues with banking websites and gaming applications misbehaving or breaking through CGNAT.  The following items helped in various ways.

    - address-pooling paired (APP)
    - endpoint-independent filtering (EIF)
    - endpoint-independent mapping (EIM)
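    All three of those behaviors are knobs on the translated term of a Junos services NAT rule.  A hedged sketch — rule, term, and pool names are placeholders, not from my deployment:

    ```
    services {
        nat {
            rule cgnat-rule {
                match-direction input;
                term subscribers {
                    then {
                        translated {
                            source-pool cgnat-pool;
                            translation-type napt-44;
                            address-pooling paired;               /* APP: same public IP per subscriber */
                            mapping-type endpoint-independent;    /* EIM */
                            filtering-type endpoint-independent;  /* EIF */
                        }
                    }
                }
            }
        }
    }
    ```

    APP keeps all of a subscriber's sessions on one public IP (which is what the banking sites cared about), while EIM/EIF make inbound flows to an established mapping work, which is what fixed most of the gaming issues.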

    From a load-balancing perspective, I had to do the following... I saw all of the traffic arriving at a single MX960, so I had to get the traffic from customers to arrive at all the MX960s to achieve a sufficient load spread

    - all my (3) CGNAT domains (DSL, CM, FTTH) are implemented using MPLS L3VPN... so sharing the IGP inet.0 metric with inet.3 allowed MP-iBGP to make a better decision about routing the 0/0 default route towards the metrically closest MX960
    - once there, the traffic needed to be spread across the ms or mams service interfaces... this is accomplished using an ams interface with a load-balancing hash-key option of source-ip
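    Both halves of that load spread can be sketched in config.  Assuming LDP as the label protocol (the knob differs for RSVP/SR) and with illustrative interface names: `ldp track-igp-metric` copies the IGP cost into inet.3 so MP-iBGP resolves toward the metrically closest MX960, and the ams bundle hashes subscribers across its member mams PICs by source IP:

    ```
    protocols {
        ldp {
            track-igp-metric;                  /* inet.3 inherits the IGP metric for BGP next-hop resolution */
        }
    }
    interfaces {
        ams0 {
            load-balancing-options {
                member-interface mams-2/0/0;   /* example member service PICs */
                member-interface mams-3/0/0;
                hash-keys {
                    ingress-key source-ip;     /* pin each subscriber to one PIC by source IP */
                }
            }
        }
    }
    ```

    Hashing on source-ip also keeps each subscriber on a single PIC, which matters when you pair it with address-pooling paired.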

    I recall hearing the MS-MPC-128G is EOL now, and that the newer SPC3 is the current, more scalable option

    Hope some of this helps and gets you going in the right direction

    - Aaron

  • 3.  RE: CGNAT sizing

    Posted 23 days ago

    Ciao @aaron.gould Many thanks for your answer! I appreciated it and shared it with my engineering staff so we can draw on your experience and be prepared to support our customer.
    Thanks a million again.

    best regards


    Massimiliano Galizia

  • 4.  RE: CGNAT sizing

    Posted 23 days ago

    YW Massimiliano!  An additional consideration... as I mentioned, during my roll-out of CGNAT I migrated all customers into VRFs (MPLS L3VPNs)...

    mpls l3vpn - ftth

    mpls l3vpn - dsl

    mpls l3vpn - cm

    ...this allowed for easy, elegant, and powerful import/export of routes using the simple route-target (RT) mechanism that MPLS L3VPN is based on

    This lets you easily steer traffic to exit the CGNAT boundary node of your choice

    Also, more to my point, implementing IPv6 over MPLS L3VPN (aka 6VPE) is now possible and fairly easy since the MPLS L3VPN architecture is in place from the previous IPv4 CGNAT deployment... so adding IPv6 is simply a matter of adding the address family and then dual-stacking your PE-CE interfaces and protocols (don't forget "set protocols mpls ipv6-tunneling")
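    A minimal 6VPE sketch under those assumptions — the AS number, BGP group, VRF, and interface names are placeholders: enable IPv6 tunneling over the v4 MPLS core, add the inet6-vpn family to the existing MP-iBGP sessions, and dual-stack the PE-CE interface inside the VRF:

    ```
    protocols {
        mpls {
            ipv6-tunneling;                    /* resolve v6 VPN routes over the IPv4 MPLS core */
        }
        bgp {
            group ibgp {
                family inet-vpn  unicast;      /* existing IPv4 L3VPN family */
                family inet6-vpn unicast;      /* add this for 6VPE */
            }
        }
    }
    routing-instances {
        ftth {
            instance-type vrf;
            interface ge-0/0/1.0;              /* dual-stacked PE-CE interface */
            vrf-target target:65000:100;       /* example RT */
            vrf-table-label;
        }
    }
    ```

    Nothing about the existing IPv4 CGNAT path changes; the v6 routes just ride the same L3VPN machinery under a second address family.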

    What happens now is you simply import/export the RT for the public internet boundary node, so v6 flows naturally out the internet connections, un-NATed, while the private IPv4 packets still flow through the CGNAT nodes

    The result is a dual-stacked CPE with private v4 being CGNATed and publicly routable v6 flowing untouched by NAT

    I tested this years ago, and am now finally getting close to pulling the trigger on it

    It's either that or buy more IP addresses and/or beef up the CGNAT hardware.... the long-term answer, as I see it, is IPv6

    Just wanted to share that, as I'm currently expanding my CGNAT architecture to once again test IPv6 in order to reduce my reliance on CGNAT for 100% of my customer traffic

    - Aaron