SRX


Ask questions and share experiences about the SRX Series, vSRX, and cSRX.

SRX Cluster Console Ping No route to host

  • 1.  SRX Cluster Console Ping No route to host

    Posted 09-06-2018 07:18

    I have a weird communication issue on an SRX1500 cluster running Junos 18.1R2.5. From the console I cannot ping anything through my public interface, such as 8.8.8.8. All other communication is working: I am able to ping and SSH to the IP address assigned to my untrust interface, and any traffic from the trust zone to untrust works as intended.

     

    There is a virtual routing instance to separate the mgmt interface routing from everything else, because the mgmt interface is configured in the same subnet as a subinterface in another zone.

     

    If I just issue the ping command to my gateway, it says "ping: sendto: No route to host".
    If I issue "ping <gateway-address> bypass-routing interface reth0.0", I can ping the gateway.

     

    config below:

     

    user@srx1500cluster-0> show configuration interfaces 
    ge-0/0/0 {
        description "WAN Uplink pair 1 of 2 - partner ge-7/0/0";
        gigether-options {
            redundant-parent reth0;
        }
    }
    
    xe-0/0/17 {
        description "Trust Interfaces Uplink pair 1 of 2 - partner xe-7/0/17";
        gigether-options {
            redundant-parent reth1;
        }
    }
    
    ge-7/0/0 {
        description "WAN Uplink pair 2 of 2 - partner ge-0/0/0";
        gigether-options {
            redundant-parent reth0;
        }
    }
    
    xe-7/0/17 {
        description "Trust Interfaces Uplink pair 2 of 2 - partner xe-0/0/17";
        gigether-options {
            redundant-parent reth1;
        }
    }
    
    fab0 {
        fabric-options {
            member-interfaces {
                xe-0/0/18;
                xe-0/0/19;
            }
        }
    }
    fab1 {
        fabric-options {
            member-interfaces {
                xe-7/0/18;
                xe-7/0/19;
            }
        }
    }
    fxp0 {
        unit 0 {
            family inet {
                address 10.2.48.10/24 {
                    master-only;
                }
            }
        }
    }
    
    reth0 {
        description "WAN Uplink - ge-0/0/0 & ge-7/0/0";
        redundant-ether-options {
            redundancy-group 1;
        }
        unit 0 {
            family inet {
                address 1.1.1.2/25;
            }
        }
    }
    reth1 {
        description "Trust Interfaces - xe-0/0/17 & xe-7/0/17";
        vlan-tagging;
        redundant-ether-options {
            redundancy-group 1;
        }
        unit 0 {                            
            disable;
            vlan-id 3967;
        }
        unit 40 {
            vlan-id 40;
            family inet {
                address 10.2.40.200/24;
            }
        }
        unit 45 {
            vlan-id 45;
            family inet {
                address 10.2.45.1/24;
            }
        }
        unit 48 {
            vlan-id 48;
            family inet {
                address 10.2.48.1/24;
            }
        }
    }
    
    user@srx1500cluster-0> show configuration routing-instances 
    vr1 {
        instance-type virtual-router;
        interface reth0.0;
        interface reth1.40;
        interface reth1.45;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop 1.1.1.1;
            }
        }
    }
    {primary:node0}
    user@srx1500cluster-0> show configuration routing-options      
    
    
    {primary:node0}
    user@srx1500cluster-0> show configuration security zones security-zone untrust   
    screen untrust-screen;
    interfaces {
        reth0.0 {
            host-inbound-traffic {
                system-services {
                    ping;
                    https;
                    ssh;
                    snmp;
                    netconf;
                    traceroute;
                }
            }
        }
    }
    
    user@srx1500cluster-0# run show configuration chassis cluster    
    control-link-recovery;
    reth-count 4;
    redundancy-group 0 {
        node 0 priority 100;
        node 1 priority 1;
    }
    redundancy-group 1 {
        node 1 priority 1;
        node 0 priority 100;
        preempt;
        interface-monitor {
            xe-0/0/17 weight 255;
            xe-7/0/17 weight 255;
            ge-0/0/0 weight 255;
            ge-7/0/0 weight 255;
        }
    }
    redundancy-group 2 {
        node 1 priority 100;
        node 0 priority 1;
        preempt;
        interface-monitor {
            ge-0/0/1 weight 255;
            ge-7/0/1 weight 255;
            ge-0/0/2 weight 255;
            ge-7/0/2 weight 255;
        }
    }
    
    
    
    user@srx1500cluster-0> show chassis cluster status 
    Monitor Failure codes:
        CS  Cold Sync monitoring        FL  Fabric Connection monitoring
        GR  GRES monitoring             HW  Hardware monitoring
        IF  Interface monitoring        IP  IP monitoring
        LB  Loopback monitoring         MB  Mbuf monitoring
        NH  Nexthop monitoring          NP  NPC monitoring              
        SP  SPU monitoring              SM  Schedule monitoring
        CF  Config Sync monitoring      RE  Relinquish monitoring
     
    Cluster ID: 1
    Node   Priority Status               Preempt Manual   Monitor-failures
    
    Redundancy group: 0 , Failover count: 1
    node0  100      primary              no      no       None           
    node1  1        secondary            no      no       None           
    
    Redundancy group: 1 , Failover count: 1
    node0  100      primary              yes     no       None           
    node1  1        secondary            yes     no       None           
    
    Redundancy group: 2 , Failover count: 2
    node0  1        secondary            yes     no       None           
    node1  100      primary              yes     no       None


  • 2.  RE: SRX Cluster Console Ping No route to host

    Posted 09-06-2018 07:27
    Hi,
    Did you try to ping using the routing-instance option?
    ping 8.8.8.8 routing-instance vr1


  • 3.  RE: SRX Cluster Console Ping No route to host

    Posted 09-06-2018 07:30

    Pinging with the routing instance works.



  • 4.  RE: SRX Cluster Console Ping No route to host

    Posted 09-06-2018 07:32

    I was getting the "no route" error before I added the routing instance; I configured the routing instance after setting up the mgmt interface.



  • 5.  RE: SRX Cluster Console Ping No route to host
    Best Answer

    Posted 09-06-2018 07:47

    Hi,

    The public interface reth0 is part of the vr1 routing instance, and the default route is also configured there, so you have to use the routing-instance option when pinging from the SRX console. This is expected behavior.

    Since the global routing table (inet.0) does not have a route towards 8.8.8.8, you get the error message when pinging without the 'routing-instance' option.
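
    A quick way to see this from the CLI (assuming the instance name vr1 from the config above):

    user@srx1500cluster-0> show route 8.8.8.8                    (no match in inet.0)
    user@srx1500cluster-0> show route 8.8.8.8 table vr1.inet.0   (matches the vr1 static default)
    user@srx1500cluster-0> ping 8.8.8.8 routing-instance vr1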

     



  • 6.  RE: SRX Cluster Console Ping No route to host

    Posted 09-06-2018 07:50

    Yeah, that totally makes sense now. First time messing with routing instances. Thanks for the quick replies.



  • 7.  RE: SRX Cluster Console Ping No route to host

    Posted 09-07-2018 09:33

    So I guess I am not completely out of the woods yet. With the routing instance in place, I found some oddities, such as NTP and DNS not working on the box. So I decided to test by moving the public interface out of the routing instance and back into the default instance.

     

    I still cannot ping from the SRX console to anything on the web. I can only ping my gateway if I include the "bypass-routing" option to skip the routing table.  

     

     

    {primary:node0}[edit]
    user@srx1500-cluster-0# show routing-instances 
    vr1 {
        instance-type virtual-router;
        interface reth1.40;
        interface reth1.45;
        interface reth1.48;
        routing-options {
            static {
                route 0.0.0.0/0 next-table inet.0;
            }
        }
    }
    
    {primary:node0}[edit]
    
    user@srx1500-cluster-0# show routing-options 
    static {
        route 0.0.0.0/0 next-hop 1.1.1.1;
    }
    
    
    
    {primary:node0}
    user@srx1500-cluster-0> show route 
    
    inet.0: 7 destinations, 8 routes (7 active, 0 holddown, 0 hidden)
    + = Active Route, - = Last Active, * = Both
    
    0.0.0.0/0          *[Static/5] 00:00:43
                        > to 1.1.1.1 via reth0.0
    10.2.48.0/24       *[Direct/0] 1d 02:52:35
                        > via fxp0.0
                        [Direct/0] 1d 02:52:35
                        > via fxp0.0
    10.2.48.10/32      *[Local/0] 1d 02:52:35
                          Local via fxp0.0
    10.2.48.11/32      *[Local/0] 1d 02:52:35
                          Local via fxp0.0
    1.1.1.0/25 *[Direct/0] 00:00:43
                        > via reth0.0
    1.1.1.2/32 *[Local/0] 00:00:43
                          Local via reth0.0
    
    vr1.inet.0: 12 destinations, 12 routes (12 active, 0 holddown, 0 hidden)
    + = Active Route, - = Last Active, * = Both
    
    0.0.0.0/0          *[Static/5] 00:00:43
                          to table inet.0
    10.2.40.0/24       *[Direct/0] 1d 02:52:35
                        > via reth1.40
    10.2.40.200/32     *[Local/0] 1d 02:52:35
                          Local via reth1.40
    10.2.48.0/24       *[Direct/0] 1d 01:04:45
                        > via reth1.48
    10.2.48.1/32       *[Local/0] 1d 01:04:45
                          Local via reth1.48
    
    inet6.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
    + = Active Route, - = Last Active, * = Both
    
    ff02::2/128        *[INET6/0] 1d 02:52:35
                          MultiRecv
    
    vr1.inet6.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
    + = Active Route, - = Last Active, * = Both
    
    ff02::2/128        *[INET6/0] 1d 02:52:35
                          MultiRecv


  • 8.  RE: SRX Cluster Console Ping No route to host

    Posted 09-08-2018 04:56

    By default, when you configure system services like NTP, DNS, syslog, and the like, they will make their connections from the master (root) routing instance.

     

    When you need them to connect from another routing instance, you will need to set an explicit source address for that service using the source-address parameter in that service's hierarchy.
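
    For NTP in this setup, for example, that would look something like this (sketch only; substitute your own server and a source address that actually exists in the instance):

    set system ntp server 216.239.35.4 routing-instance vr1
    set system ntp source-address 1.1.1.1
    set system name-server 208.67.222.222 source-address 1.1.1.1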

     



  • 9.  RE: SRX Cluster Console Ping No route to host

    Posted 09-10-2018 07:05

    Thanks for the info... I have done as you suggested, but it still doesn't appear to work. The log messages show that it was able to pull NTP from the specified server, but the NTP status still shows that it failed with "no route to host". System uptime shows that it is currently using the local clock as its time source. DNS is still not working either.

     

    {primary:node0}[edit]
    user@srx1500-cluster-0# show system ntp 
    server 216.239.35.4 routing-instance vr1;
    source-address 1.1.1.1;
    
    
    
    user@srx1500-cluster-0# show system name-server     
    208.67.222.222 source-address 1.1.1.1;
    208.67.220.220 source-address 1.1.1.1;
    
    
    {primary:node0}
    user@srx1500-cluster-0> show ntp status                         
    /usr/bin/ntpq: configured source-address in ntp.conf 1.1.1.1 invalid.
    Using one of the local addresses.
    /usr/bin/ntpq: write to localhost failed: No route to host
    
    
    
    user@srx1500-cluster-0# run show system uptime 
    node0:
    --------------------------------------------------------------------------
    Current time: 2018-09-10 13:48:38 UTC
    Time Source:  LOCAL CLOCK 
    System booted: 2018-09-05 13:00:19 UTC (5d 00:48 ago)
    Protocols started: 2018-09-06 13:30:30 UTC (4d 00:18 ago)
    Last configured: 2018-09-10 13:46:42 UTC (00:01:56 ago) by user
     1:48PM  up 5 days, 48 mins, 1 users, load averages: 0.62, 0.76, 0.62
    
    node1:
    --------------------------------------------------------------------------
    Current time: 2018-09-10 13:48:35 UTC
    Time Source:  LOCAL CLOCK 
    System booted: 2018-09-05 13:00:39 UTC (5d 00:47 ago)
    Last configured: 2018-09-10 13:46:37 UTC (00:01:58 ago) by user
     1:48PM  up 5 days, 48 mins, 0 users, load averages: 0.61, 0.50, 0.36
     
     
     
     
    {primary:node0}[edit]
    user@srx1500-cluster-0# run ping 216.239.35.4 routing-instance vr1
    PING 216.239.35.4 (216.239.35.4): 56 data bytes
    64 bytes from 216.239.35.4: icmp_seq=0 ttl=45 time=33.427 ms
    64 bytes from 216.239.35.4: icmp_seq=1 ttl=45 time=33.440 ms
    ^C
    --- 216.239.35.4 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max/stddev = 33.427/33.433/33.440/0.007 ms
    
    {primary:node0}[edit]
    user@srx1500-cluster-0# run ping 208.67.222.222 routing-instance vr1
    PING 208.67.222.222 (208.67.222.222): 56 data bytes
    64 bytes from 208.67.222.222: icmp_seq=0 ttl=59 time=14.993 ms
    64 bytes from 208.67.222.222: icmp_seq=1 ttl=59 time=15.200 ms
    ^C
    --- 208.67.222.222 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max/stddev = 14.993/15.096/15.200/0.104 ms
    
    
    
    {primary:node0}[edit]
    user@srx1500-cluster-0# run show log messages | last | match ntp
    Sep 10 13:45:03.287 2018  srx1500-cluster-0 xntpd[6084]: ntpd 4.2.0-a Fri May 25 22:01:14  2018 (1)
    Sep 10 13:45:03.289 2018  srx1500-cluster-0 xntpd[6084]: precision = 0.106 usec
    Sep 10 13:45:03.289 2018  srx1500-cluster-0 xntpd[6084]: Listening on interface ggsn_vpn, 129.16.0.1#123
    Sep 10 13:45:03.290 2018  srx1500-cluster-0 xntpd[6084]: kernel time sync status 2040
    Sep 10 13:45:03.290 2018  srx1500-cluster-0 xntpd[6084]: frequency initialized 0.000 PPM from /var/db/ntp.drift
    Sep 10 13:45:03.292 2018  srx1500-cluster-0 xntpd[6084]: set NTP minpoll:64 second, maxpoll:1024 second
    Sep 10 13:45:07.688 2018  srx1500-cluster-0 mgd[26725]: UI_CMDLINE_READ_LINE: User 'user', command 'show system ntp '
    Sep 10 13:45:10.323 2018  srx1500-cluster-0 xntpd[6084]: synchronized to 216.239.35.4, stratum=1
    Sep 10 13:45:13.393 2018  srx1500-cluster-0 xntpd[6084]: time reset +3.068685 s
    Sep 10 13:45:13.393 2018  srx1500-cluster-0 xntpd: NTPD_CHANGED_TIME: time reset +3.068685 s
    Sep 10 13:45:13.394 2018  srx1500-cluster-0 xntpd: kernel time sync disabled 2041
    
    
    
    
    
    {primary:node0}
    user@srx1500-cluster-0> ping google.com routing-instance vr1
    ping: cannot resolve google.com: Host name lookup failure


  • 10.  RE: SRX Cluster Console Ping No route to host

    Posted 09-10-2018 15:32

    This suggests that the address you are configuring is not a valid IP interface inside vr1.

     

    Also remember that if the interface you choose as the source is private, it will need to hit a NAT somewhere along the line to work as a source for these requests.
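
    To double-check what the source NAT rule is matching, you can run:

    show security nat source rule all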

     



  • 11.  RE: SRX Cluster Console Ping No route to host

    Posted 09-11-2018 07:43

    So I was using the public address as the source. Based on your comment, I changed it to an interface with a private address, but I still have the same results. The source interface is definitely in the virtual routing instance, and source NAT is functioning for that subnet.

     

    ######### configured source interface #########
    
    {primary:node0}[edit]
    user@srx1500-cluster-0# show system ntp 
    server 216.239.35.4 routing-instance vr1;
    source-address 10.2.45.1;
    
    {primary:node0}[edit]
    user@srx1500-cluster-0# show system name-server 
    208.67.222.222 source-address 10.2.45.1;
    208.67.220.220 source-address 10.2.45.1;
    
    
    ########## source address 10.2.45.1 is tied to reth1.45 ###########
    
    {primary:node0}[edit]
    user@srx1500-cluster-0# run show interfaces terse | match reth 
    ge-0/0/0.0              up    up   aenet    --> reth0.0
    xe-0/0/17.0             down  up   aenet    --> reth1.0
    xe-0/0/17.45            up    up   aenet    --> reth1.45
    ge-7/0/0.0              up    up   aenet    --> reth0.0
    xe-7/0/17.0             down  up   aenet    --> reth1.0
    xe-7/0/17.45            up    up   aenet    --> reth1.45
    
    
    reth0                   up    up
    reth0.0                 up    up   inet     1.1.1.2/25
    reth1                   up    up
    reth1.0                 down  down
    reth1.45                up    up   inet     10.2.45.1/24    
       
    
    
    
    ####### Configured routing instance; interface reth1.45 is present ########
    
    user@srx1500-cluster-0# show routing-instances   
    vr1 {
        instance-type virtual-router;
        interface reth0.0;
        interface reth1.40;
        interface reth1.45;
        interface reth1.48;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop 1.1.1.1;
            
            }
        }
    }
    
    
    
    
    ####### It should be NAT'd correctly ########
    
    {primary:node0}[edit]
    user@srx1500-cluster-0# show security nat source 
    rule-set trust-to-untrust-nat {
        from zone trust;
        to zone untrust;
        rule trust-nets-to-web-nat {
            match {
                source-address-name [ jnpr-trust-net ];
                destination-address 0.0.0.0/0;
            }
            then {
                source-nat {
                    interface;
                }
            }
        }
    }
    
    
    ###### trust zone configuration with interface reth1.45 ########
    
    security-zone trust {
        interfaces {
            reth1.45 {
                host-inbound-traffic {
                    system-services {
                        ping;
                        https;
                        ssh;
                        traceroute;
                    }
                }
            }
        }
        application-tracking;
    }
    
    
    
    ###### Matching Address object in NAT rule ########
    
    {primary:node0}[edit]
    user@srx1500-cluster-0# show security address-book global address jnpr-trust-net 
    10.2.45.0/24;
    
    
    
    ##### ntp status #######
    
    {primary:node0}[edit]
    user@srx1500-cluster-0# run show ntp status          
    /usr/bin/ntpq: configured source-address in ntp.conf 10.2.45.1 invalid.
    Using one of the local addresses.
    /usr/bin/ntpq: write to localhost failed: No route to host
    
    
    
    ###### routing tables ########
    
    {primary:node0}[edit]
    user@srx1500-cluster-0# run show route 
    
    inet.0: 3 destinations, 4 routes (3 active, 0 holddown, 0 hidden)
    + = Active Route, - = Last Active, * = Both
    
    10.2.48.0/24       *[Direct/0] 5d 01:04:36
                        > via fxp0.0
                        [Direct/0] 5d 01:04:36
                        > via fxp0.0
    10.2.48.10/32      *[Local/0] 5d 01:04:36
                          Local via fxp0.0
    10.2.48.11/32      *[Local/0] 5d 01:04:36
                          Local via fxp0.0
    
    vr1.inet.0: 15 destinations, 15 routes (15 active, 0 holddown, 0 hidden)
    + = Active Route, - = Last Active, * = Both
    
    0.0.0.0/0          *[Static/5] 3d 22:08:35
                        > to 1.1.1.1 via reth0.0
    10.2.45.0/24       *[Direct/0] 5d 01:04:36
                        > via reth1.45
    10.2.45.1/32       *[Local/0] 5d 01:04:36
                          Local via reth1.45
    10.2.48.0/24       *[Direct/0] 4d 23:16:46
                        > via reth1.48
    10.2.48.1/32       *[Local/0] 4d 23:16:46
                          Local via reth1.48
    
    
    


  • 12.  RE: SRX Cluster Console Ping No route to host

    Posted 09-12-2018 02:49

    Sorry, I'm not able to visualize your topology from the configuration, but the NAT is likely still an issue.

     

    Note that traffic from the SRX itself is called self traffic and is NOT a member of any configured zone. If you need to write policies to affect self traffic, they are written to or from the junos-host zone. As a result, no self traffic will be processed by your NAT rule.
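
    If you ever do need policy control over self traffic, it is written against junos-host; a hypothetical example, with a made-up policy name:

    set security policies from-zone junos-host to-zone untrust policy allow-self-out match source-address any destination-address any application any
    set security policies from-zone junos-host to-zone untrust policy allow-self-out then permit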

     

    I also notice that your NAT rule is using an SRX interface as the public address. So the simple solution would be to source your public service requests (NTP in this case) from that interface on the SRX, and then make sure that the zone the interface belongs to has host-inbound-traffic set up to accept the response.
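
    For example (a sketch; reth0.0 is the interface your NAT rule uses, and 1.1.1.2 is its address from your output):

    set system ntp source-address 1.1.1.2
    set security zones security-zone untrust interfaces reth0.0 host-inbound-traffic system-services ntp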

     



  • 13.  RE: SRX Cluster Console Ping No route to host

    Posted 09-14-2018 07:19

    Steve, thank you for your responses. You are correct about my NAT using the public interface. I did originally have NTP and DNS sourced from that same public interface, and I was still unable to get it to work with that config.

     

    I did open a ticket with JTAC, and they said that NTP and other services will always originate from the default routing instance; because fxp0 is the only interface in that routing instance, they will be sourced from it. I believe the exception is syslog, which uses the revenue port you specify as the source interface.

     

    I also found a forum post with a configuration example from an SRX1400 cluster in a similar scenario, in which they had all revenue ports in a custom routing instance and fxp0 in the default routing instance. Same setup as mine, and they were able to get all services working. https://forums.juniper.net/t5/SRX-Services-Gateway/SRX-Cluster-without-using-fxp0/td-p/202469

     

    Based on all of this information, I have gotten it to work. I configured a backup-router for each node, as well as a default route in the default routing instance pointing at the backup router. The backup router is in the same subnet as our management fxp0 interfaces, and it has its own default route back to a revenue port on the SRX (rough sketch below).
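
    In configuration terms it was roughly this (a sketch; 10.2.48.254 stands in for the backup router's address, which I haven't shown):

    set groups node0 system backup-router 10.2.48.254
    set groups node1 system backup-router 10.2.48.254
    set routing-options static route 0.0.0.0/0 next-hop 10.2.48.254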

     

     

     



  • 14.  RE: SRX Cluster Console Ping No route to host

    Posted 09-15-2018 05:28

    Thanks for the update. Interesting that the root-instance-only sourcing requirement is not clearly documented. I have sourced RADIUS and syslog from routing instances in the past and assumed it would work the same for other services.