SRX


Ask questions and share experiences about the SRX Series, vSRX, and cSRX.
  • 1.  SRX240 fxp0 management interfaces not working!!

    Posted 02-13-2011 11:37

    Hi,

    I have two SRX240s in a cluster, and I read that interface ge-0/0/0 becomes the management interface (fxp0) in cluster mode.

     

    So I configured as follows:

     

    set groups node0 system host-name f1-sou1
    set groups node0 interfaces fxp0 unit 0 family inet address 10.26.4.2/25
    set groups node1 system host-name f2-sou1
    set groups node1 interfaces fxp0 unit 0 family inet address 10.26.4.3/25

     

    The trouble is:

     

    1. From the LAN, I can only ping the first IP address, not the second.

     

    2. I cannot ping either of these from the WAN.

     

    3. I cannot SSH to either of these IP addresses from the LAN or the WAN.

     

    Am I missing anything here?

     

    Thanks,

     

    Paul

     

     



  • 2.  RE: SRX240 fxp0 management interfaces not working!!

    Posted 02-13-2011 12:47

    How have you connected fxp0 to the switch?

    Where are you trying to access fxp0 from?

    Hint: create a firewall filter matching the fxp0 destination IP, apply it to the interface, and check where the packets go...
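    For reference, here is a minimal sketch of such a debugging filter. The filter name, counter name, and the 10.26.4.2 address are illustrative (the address comes from the config earlier in the thread), so adjust them to your own setup:

    set firewall family inet filter CATCH-FXP0 term MGMT from destination-address 10.26.4.2/32
    set firewall family inet filter CATCH-FXP0 term MGMT then count fxp0-mgmt-hits
    set firewall family inet filter CATCH-FXP0 term MGMT then log
    set firewall family inet filter CATCH-FXP0 term MGMT then accept
    set firewall family inet filter CATCH-FXP0 term DEFAULT then accept
    set interfaces fxp0 unit 0 family inet filter input CATCH-FXP0

    After committing, "show firewall filter CATCH-FXP0" and "show firewall log" will tell you whether your management packets are reaching fxp0 at all.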

    I was trying to solve the same problem, and in the end I did management via "virtual chassis" mode (KB 18228). By the way, you need to set the backup-router at least.



  • 3.  RE: SRX240 fxp0 management interfaces not working!!
    Best Answer

    Posted 02-13-2011 13:07

    Hi Paul,

     

    You've discovered one of the more aggravating "features" of the SRX products.  The fxp0 interfaces become "out of band" management, and I use the quotes because Juniper has a very different opinion of what "out of band" means than many other manufacturers and customers.  Personally I think it's an incredibly impractical way to do management, and I don't even use fxp0 interfaces on my clusters because I can't stand the way Juniper thinks they should work.

     

    Basically, you need to have a completely separate management network (in other words, your 10.26.4.0/25 network needs to be separate from any kind of possible transit traffic through the SRX and routed one hop up from the SRX cluster).  If your management PC does not live on that same network, you need to configure specific "backup-router" statements in the groups config.  For example, if your PC is 192.168.1.5, you might have something like this:

     

     

    groups {
        node0 {
            system {
                host-name f1-sou1;
                backup-router 10.26.4.126 destination 192.168.1.0/24;
            }
            interfaces {
                fxp0 {
                    unit 0 {
                        family inet {
                            address 10.26.4.2/25;
                        }
                    }
                }
            }
        }
        node1 {
            system {
                host-name f2-sou1;
                backup-router 10.26.4.126 destination 192.168.1.0/24;
            }
            interfaces {
                fxp0 {
                    unit 0 {
                        family inet {
                            address 10.26.4.3/25;
                        }
                    }
                }
            }
        }
    }
    apply-groups "${node}";

    You'll also want to make sure you have system management services enabled on your fxp0 interface, if you plan to use that interface for ssh or web management:

     

     

    system {
        services {
            ssh;
            web-management {
                http {
                    interface fxp0;
                }
                https {
                    system-generated-certificate;
                    interface fxp0;
                }
            }
        }
    }

    Another option is to use Virtual Chassis mode for the SRX.

     

    http://kb.juniper.net/InfoCenter/index?page=content&id=KB18228&smlogin=true

     

    You might find that this works a little nicer than fighting with the fxp0 nonsense.  You can log into the active node via the reth interface.  If you need to access the secondary node, you can use "request routing-engine login node 1."  This is how I do my management of SRX clusters.
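    For example, from the CLI on the primary node (the prompt and username shown here are just illustrative):

    {primary:node0}
    admin@f1-sou1> request routing-engine login node 1

    {secondary:node1}
    admin@f2-sou1>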

     



  • 4.  RE: SRX240 fxp0 management interfaces not working!!

    Posted 02-13-2011 15:51

    Hey guys,

    Thanks for the great replies. So I added some config as you suggested, and now it looks like this:

     

    set groups node0 system host-name f1-sou1
    set groups node0 system backup-router 10.26.4.125
    set groups node0 system backup-router destination 0.0.0.0/0
    set groups node0 interfaces fxp0 unit 0 family inet address 10.26.4.2/25
    set groups node1 system host-name f2-sou1
    set groups node1 system backup-router 10.26.4.125
    set groups node1 system backup-router destination 0.0.0.0/0
    set groups node1 interfaces fxp0 unit 0 family inet address 10.26.4.3/25

     

     

    I can SSH to both fxp0 interfaces now from the LAN 🙂 but not from the WAN 😞

     

    Any ideas? My head is melted from this.

     

    Thanks again,

     

    Paul



  • 5.  RE: SRX240 fxp0 management interfaces not working!!

    Posted 02-13-2011 16:30

    Crikey!!!

     

    I figured it out.

     

    When I added the 10.26.4.0/25 addresses to fxp0, I lost access to the mgmt VLAN from the WAN.

     

    My mgmt VLAN/subnet is behind the firewall; to get to it from the WAN you must traverse the SRX.

     

    So the 10.26.4.0/25 network was being seen as a connected route on fxp0, and therefore it would send all mgmt traffic out fxp0 even though my mgmt VLAN was connected behind reth2.
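    A quick way to see this from the CLI is to look up a management host's address (the address here is just an example host on the mgmt VLAN):

    show route 10.26.4.50

    With fxp0 addressed, the best match is the directly connected 10.26.4.0/25 route via fxp0.0, rather than the route to the mgmt VLAN behind reth2.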

     

    Once I disabled fxp0 I was again able to access mgmt devices behind the firewall.

     

     

    Really weird setup of the mgmt interface, Juniper!!!!

     

     

     



  • 6.  RE: SRX240 fxp0 management interfaces not working!!

    Posted 02-14-2011 08:13

    So basically I removed the IP addresses from the fxp0 interfaces, and I could again contact other devices on the management VLAN from the WAN.
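    For reference, backing the addresses out was just a matter of removing them from the node groups again, along the lines of (group names and addresses as earlier in this thread):

    delete groups node0 interfaces fxp0 unit 0 family inet address 10.26.4.2/25
    delete groups node1 interfaces fxp0 unit 0 family inet address 10.26.4.3/25
    commit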

     

    The whole fxp0 addressing scheme seems illogical.

     

    Paul



  • 7.  RE: SRX240 fxp0 management interfaces not working!!

    Posted 02-14-2011 09:16

     


    @paulkil wrote:

    Crikey!!!

    ....

    So the 10.26.4.0/25 network was being seen as a connected route on fxp0, and therefore it would send all mgmt traffic out fxp0 even though my mgmt VLAN was connected behind reth2.

    ....

    Really weird setup of the mgmt interface, Juniper!!!!


    Exactly... 🙂  That's why I mentioned that you needed to put your fxp0 addresses into a completely separate subnet that doesn't transit the SRX.

     

    It still baffles me how Juniper can call a management interface "out of band" when it will route traffic out that interface for transit traffic.  That is exactly the opposite of "out of band," Juniper!

     



  • 8.  RE: SRX240 fxp0 management interfaces not working!!

    Posted 04-24-2011 22:20

    keithr,

     

    I'll need your help with this.

    The 'accepted solution' doesn't work for me, and I have the exact same scenario as what's mentioned in this thread.

    Except I'm using the awesome added benefits that 11.1R1.10 brings with regard to the switching capabilities on a cluster.  (No more reths.  I just have ae interfaces.  It works great! 🙂 )

     

    Pretty simple....

     

    I have a management subnet (10.12.5.x/24) and a bunch of prefixes switched to the firewall, originating downstream from an EX4200 VC.  The prefixes sent to the firewall are DMZ-based prefixes that need FW policies applied.  All other 'locally significant' prefixes switch and route on the VC and end there.  The management subnet is one of these.

     

    I.e. a bunch of VLANs (Management, Clients, Printers, etc.) that switch across the VC and terminate via RVIs on the switch, and then a bunch of VLANs (FW-Untrust, FW-DMZ-Corp, FW-DMZ-Public, FW-DMZ-Public-Int, etc.) whose RVIs terminate upstream on the FW.  ae0 and ae1, feeding LACP-enabled trunks, interconnect the VC and SRX cluster.

     

    ge-0/0/0 on each SRX member (and ge-9/0/0 on node1; these are SRX650s) is physically cabled into ports on the VC that are mapped to the management subnet via VLAN 5.  Now, I read that ge-0/0/0 turns into fxp0 once a cluster is created between two members.  Is that correct?

     

    Following your config, I created 10.12.5.2 and 10.12.5.3 on the fxp0 interfaces for each SRX node (node 0 and node 1) via the

    set groups node0 interfaces fxp0 unit 0 family inet address 10.12.5.2/24

    set groups node1 interfaces fxp0 unit 0 family inet address 10.12.5.3/24

    commands.

    And the 'backup-router' statements with the 'destination' parameters,

    i.e.,

    backup-router 10.12.5.1 (RVI address on VC) destination 10.12.12.1/24 (prefix of 'Clients' on VC)
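    In set form, and assuming the 'Clients' prefix is 10.12.12.0/24 as described above, that looks something like:

    set groups node0 system backup-router 10.12.5.1 destination 10.12.12.0/24
    set groups node1 system backup-router 10.12.5.1 destination 10.12.12.0/24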

     

    ----------------------------------------------------------------------------------------------------------------------------------------------------------------

    To be clear, if you've understood the above methodology, the 'Management' prefix/VLAN DOES NOT transit the SRX cluster (it's not included in the trunk list on ae0 and ae1 from the VC to the SRX cluster), so I'm hopefully satisfying the requirements for using the fxp0 interfaces 'out of band'.

    ----------------------------------------------------------------------------------------------------------------------------------------------------------------

     

    So, like paulkil here, I can ping each IP address from the 'Clients' VLAN (10.12.12.x/24), but can't SSH or HTTPS to it.

     

    The provided config doesn't help, and from my understanding I should be able to manage the SRX cluster on 10.12.5.2 when node0 is active and on 10.12.5.3 when node1 is active, in this way utilising this 'out of band' method with the ge-0/0/0/ge-9/0/0/fxp0 interface.



  • 9.  RE: SRX240 fxp0 management interfaces not working!!

    Posted 01-04-2019 11:46

    I know this is a very old post; all I was looking for was the ability to log into the secondary node for troubleshooting. We have a pair of SRX240H2s in a cluster. We had an issue on Monday where it would have been much quicker to use the command "request routing-engine login node 1".

     

    THANKS for that command. I love our Juniper equipment, but sometimes it is difficult to find a way around issues, especially with our SRXs. I like our EX switches much better.

     

    Steve