Junos OS


Ask questions and share experiences about Junos OS.
  • 1.  aggregated inline service not passing static NAT traffic

    Posted 01-10-2020 14:05

    I'm working on setting up inline static NAT on an MX480 with two MPC 3D 16x 10GE cards in it. I can get it working with just one hardware interface: 

     

     

    # show services service-set static-nat-test
    nat-rules dst-nat-list-rule;
    interface-service {
        service-interface si-0/1/0;
    }
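
    For context, the "dst-nat-list-rule" NAT rule itself never appears in this thread. A minimal inline static destination NAT rule behind it might look something like this (the addresses are illustrative, loosely based on the 7.7.7.0/24 and 100.64.0.0/10 ranges mentioned later in the thread):

    set services nat rule dst-nat-list-rule match-direction input
    set services nat rule dst-nat-list-rule term t1 from destination-address 7.7.7.1/32
    set services nat rule dst-nat-list-rule term t1 then translated destination-prefix 100.64.0.1/32
    set services nat rule dst-nat-list-rule term t1 then translated translation-type dnat-44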
    

     

    But as soon as I try to incorporate some redundancy, it stops passing NAT'd traffic: 

     

    # show services service-set static-nat-test
    nat-rules dst-nat-list-rule;
    interface-service {
        service-interface asi0;
    }
    
    # show interfaces asi0
    aggregated-inline-services-options {
        primary-interface si-0/1/0;
        secondary-interface si-1/1/0;
    }
    unit 0 {
        family inet;
    }
    

    I've looked into service interface pools, but those appear to be for L2TP services (or "if the service set has a PGCP rule configured"... not sure what that is). 

     

     

The only example of an aggregated inline service is also for L2TP, so does that mean we can't use asi interfaces for inline NAT? The config commits just fine; it just doesn't pass any data. 

     

    If this isn't the proper way to do things, then how can I have some redundancy for inline NAT? I'm using the service set on an aggregated ethernet connection, so if one MPC dies, the other side picks up just fine. Except in this case inline NAT would stop working until the failed MPC comes back. 
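
    For reference, the ae redundancy being described here is the standard setup with one member link per MPC, something like this (the xe- member names are assumed, not from the thread):

    set chassis aggregated-devices ethernet device-count 1
    set interfaces xe-0/1/5 gigether-options 802.3ad ae0
    set interfaces xe-1/1/5 gigether-options 802.3ad ae0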

     


    #mx
    #NAT
    #asi


  • 2.  RE: aggregated inline service not passing static NAT traffic

    Posted 01-10-2020 16:33

Hmm, apparently asi might be the wrong interface type? Looking at this example: 

    https://www.juniper.net/documentation/en_US/junos/topics/example/nat-static-source-translation-ams.html

     

    I changed it to an ams interface, but now get this when I try to commit: 

     

    root@edge1# commit check
    re0:
    ../../../../../../src/junos/usr.sbin/spd/spd_extract_service_set.c:6511: insist '0' failed
    error: Check-out pass for Adaptive services process (/usr/sbin/spd) dumped core (0x86)
    error: configuration check-out failed
    

    Here's my config:

     

    root@edge1# top show interfaces ams0
    load-balancing-options {
        member-interface si-0/1/0;
        member-interface si-1/1/0;
        member-failure-options {
            redistribute-all-traffic {
                enable-rejoin;
            }
        }
    }
    unit 1 {
        family inet;
    }
    
    root@edge1# show service-set STATIC-DST-NAT
    nat-rules dst-nat-list-rule;
    interface-service {
        service-interface ams0.1;
        load-balancing-options {
            hash-keys {
                ingress-key destination-ip;
                egress-key source-ip;
            }
        }
    }
    

😖



  • 3.  RE: aggregated inline service not passing static NAT traffic
    Best Answer

    Posted 01-11-2020 03:41

    Hello,

     

Inline NAT is not supported on asi interfaces.

    Inline NAT is not supported on ams interfaces either; only MS-MPC-based NAT is supported on ams.

    If you want inline NAT high availability, then due to the stateless nature of inline NAT, it can be achieved with simple routing, as below. Assuming your NAT pool is 203.0.113.0/24, use a next-hop-style NAT configuration and configure one extra static route pointing at the "backup" si- interface:

     

    set routing-instances INSIDE-VR routing-options static route 0/0 next-hop si-0/0/0.100
    set routing-instances INSIDE-VR routing-options static route 0/0 next-hop si-1/1/0.100
    set routing-instances OUTSIDE-VR routing-options static route 203.0.113.0/24 next-hop si-1/1/0.200

Junos will insert one more 203.0.113.0/24 [Static/1] return route for your NAT pool, and that [Static/1] route will be preferred over the aforementioned 203.0.113.0/24 [Static/5] return route.
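
    For anyone following along, the next-hop-style service sets that those routes assume would look roughly like this (service-set names, rule name, and unit numbers are illustrative, not from the thread):

    set services service-set NAT-SS-PRIMARY nat-rules src-nat-rule
    set services service-set NAT-SS-PRIMARY next-hop-service inside-service-interface si-0/0/0.100
    set services service-set NAT-SS-PRIMARY next-hop-service outside-service-interface si-0/0/0.200
    set services service-set NAT-SS-BACKUP nat-rules src-nat-rule
    set services service-set NAT-SS-BACKUP next-hop-service inside-service-interface si-1/1/0.100
    set services service-set NAT-SS-BACKUP next-hop-service outside-service-interface si-1/1/0.200
    ## each si- unit needs family inet plus a service-domain; shown for si-0/0/0, likewise for si-1/1/0
    set interfaces si-0/0/0 unit 100 family inet
    set interfaces si-0/0/0 unit 100 service-domain inside
    set interfaces si-0/0/0 unit 200 family inet
    set interfaces si-0/0/0 unit 200 service-domain outside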

     

If you want inline NAT with load-balancing between si- interfaces, use a next-hop-style NAT configuration and configure two static routes as below:

     

    ## Do the 2 static default routes as above
set routing-instances OUTSIDE-VR routing-options static route 203.0.113.0/24 next-hop si-0/0/0.200 preference 0
    set routing-instances OUTSIDE-VR routing-options static route 203.0.113.0/24 next-hop si-1/1/0.200 preference 0

Because of "preference 0", these two static routes will be preferred over the auto-added 203.0.113.0/24 [Static/1] return route, and traffic will load-balance.

Last but not least: do NOT forget the Junos "load-balance per-packet" forwarding-table export policy!
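
    That policy is the usual two-liner (it matches the "pplb" policy that shows up in the working config later in this thread):

    set policy-options policy-statement pplb then load-balance per-packet
    set routing-options forwarding-table export pplb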

    HTH

    Thx

    Alex

     

     

     



  • 4.  RE: aggregated inline service not passing static NAT traffic

    Posted 01-11-2020 05:14

Sweet! I really wish this were a bit clearer in the tech docs; I scoured them for hours trying to figure something out. Funnily enough, the asi config was accepted and no errors were thrown (it just didn't work).

     

    I'll try the routing option. Thanks! 



  • 5.  RE: aggregated inline service not passing static NAT traffic

    Posted 01-13-2020 16:48

Alright, I got the routes to work properly, but am now hitting another snag. I have the VR set up like so: 

     

    {master}[edit routing-instances]
    root@edge1# show
    STATIC-SRC-VR {
        instance-type virtual-router;
        interface si-0/1/0.100;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop si-0/1/0.100;
            }
        }
    }
    

     

But when I add the redundant route and second service set, the commit fails, stating I can't use the same pool in two different service sets (even though it's a stateless/inline service). 

     

    {master}[edit routing-instances]
    root@edge1# show
    STATIC-SRC-VR {
        instance-type virtual-router;
        interface si-0/1/0.100;
        interface si-1/1/0.100;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop [ si-0/1/0.100 si-1/1/0.100 ];
            }
        }
    }
    
    
    {master}[edit services]
    root@edge1# show
    service-set STATIC-SRC-NAT-1 {
        nat-rules SRC-NAT-RULE;
        nat-rules src-nat-list-rule;
        next-hop-service {
            inside-service-interface si-0/1/0.100;
            outside-service-interface si-0/1/0.200;
        }
    }
    service-set STATIC-SRC-NAT-2 {
        nat-rules SRC-NAT-RULE;
        nat-rules src-nat-list-rule;
        next-hop-service {
            inside-service-interface si-1/1/0.100;
            outside-service-interface si-1/1/0.200;
        }
    }
    

    The exact error is: 

    root@edge1# commit
    re0:
    [edit services]
      'service-set STATIC-SRC-NAT-2'
        NAT pool 1.2.3.4/32 is already used by service set STATIC-SRC-NAT-1
    error: configuration check-out failed
    

    Where 1.2.3.4/32 is a test public IP in one of the pools. I bet any other pool would throw the same error. 

     

Does this mean I can't have redundancy with inline NAT? 😖



  • 6.  RE: aggregated inline service not passing static NAT traffic

    Posted 01-13-2020 19:23


  • 7.  RE: aggregated inline service not passing static NAT traffic

    Posted 01-13-2020 21:31

Spoke too soon. 😛 Setting [services nat] allow-overlapping-nat-pools got the commit to work. The rest took a bit of trial and error (I'm new to Juniper, so I had to learn a few things along the way). 

     

Here's what I ended up with that is actually working. I tested it by running inbound and outbound traffic across the inline NAT, then requesting an FPC restart on slot 0 or 1 (command sketch after the config below), and the traffic didn't skip a beat.

    {master}[edit routing-instances]
    root@edge1# show
    STATIC-DST-VR {
        instance-type forwarding;
        routing-options {
            static {
                route 0.0.0.0/0 {
                    next-hop [ si-0/1/0.200 si-1/1/0.200 ];
                    preference 0;
                }
            }
        }
    }
    STATIC-SRC-VR {
        instance-type forwarding;
        routing-options {
            static {
                route 0.0.0.0/0 {
                    next-hop [ si-1/1/0.100 si-0/1/0.100 ];
                    preference 0;
                }
            }
        }
    }
    
    {master}[edit routing-options]
    root@edge1# show
    interface-routes {
        rib-group inet static-group;
    }
    static {
        route 7.7.7.0/24 { # inbound public IPs to be NAT'd to private ones
            next-table STATIC-DST-VR.inet.0; 
            preference 0;
        }
    }
    rib-groups {
        static-group {
            import-rib [ inet.0 STATIC-SRC-VR.inet.0 STATIC-DST-VR.inet.0 ];
        }
    }
    forwarding-table {
        export pplb;
    }
    
    {master}[edit policy-options]
    root@edge1# show
    policy-statement pplb {
        then {
            load-balance per-packet;
            accept;
        }
    }
    
    {master}[edit firewall family inet]
    root@edge1# show
    filter static-src-filter {
        term sources {
            from {
                source-prefix-list {
                    STATIC-NAT-PRIVATE;
                }
            }
            then {
                routing-instance STATIC-SRC-VR;
            }
        }
        term default {
            then accept;
        }
    }
    
    {master}[edit interfaces ae0]
    root@edge1# show
    flexible-vlan-tagging;
    aggregated-ether-options {
        minimum-links 1;
        link-speed 10g;
        lacp {
            active;
            periodic fast;
            accept-data;
        }
    }
    unit 100 {
        vlan-id 100;
        family inet {
            filter {
                input static-src-filter; # inbound private 100.64.0.0/10 data to be NAT'd to public ips
            }
            address 10.0.10.1/29;
        }
    }
    
    
    {master}[edit services nat]
    root@edge1# show
    allow-overlapping-nat-pools;
    # (service sets and nat rules omitted)
    

     

Where 7.7.7.0/24 is my "public" IP range. 
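
    The failover test mentioned above would have been something along these lines (slot number as appropriate):

    request chassis fpc slot 0 restart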

     

Whew! I was really hoping an "asi" interface would be smart enough to load-balance and add redundancy for inline NAT. Feature request? 😛 But anyway, it's working! 

     

    Thank you