I need help configuring a chassis cluster on an SRX340 device. Any suggestions would be greatly appreciated.
Currently, an SRX340 is running in a production environment. I now need to add another SRX and set up clustering. Which of the following are the best steps:
- Configure the standby SRX with the cluster ID and node ID, connect it to the running/active SRX, and reboot? Will this procedure pull the config from the running/active SRX?
- Configure the standby SRX with the cluster ID and node ID, copy the config from the running SRX, paste it into the standby SRX, and then connect the two together. Does this procedure require a reboot?
Of the above two, which is the recommended choice? Or, what steps does Juniper recommend for adding the standby device (in this case an SRX) and performing the clustering?
Also, for the SRX340, are there specific ports I need to cable for the control and fabric (data) links? I know any GE port works for the data link, but I'm not sure whether the SRX340 has a specific GE port assigned for the fabric (data) link.
Thank you for your time and support.
The following link defines the interfaces used for clustering, with a diagram as well.
Now, on to the best way to convert a standalone device into a cluster.
A standalone SRX differs from a cluster in the following ways:
#1 Interface renumbering: for SRX340 and SRX345 devices, the ge-0/0/1 interface on node 1 is renumbered to ge-5/0/1.
#2 In a cluster you must configure "reth" interfaces, each comprising one or more member interfaces from each node.
#3 Group configuration (apply-groups) defining the node-specific management and system settings.
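As an illustration of points #2 and #3, a minimal sketch of the group and reth configuration might look like the following (host names, addresses, and member ports are placeholder assumptions, not values from this thread):

```
# Hypothetical node-specific settings via apply-groups
set groups node0 system host-name srx340-node0
set groups node0 interfaces fxp0 unit 0 family inet address 192.168.1.1/24
set groups node1 system host-name srx340-node1
set groups node1 interfaces fxp0 unit 0 family inet address 192.168.1.2/24
set apply-groups "${node}"

# A reth interface built from one member port on each node
set chassis cluster reth-count 2
set interfaces ge-0/0/3 gigether-options redundant-parent reth0
set interfaces ge-5/0/3 gigether-options redundant-parent reth0
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 10.0.0.1/24
```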
My recommended approach for converting a production standalone into a production cluster would be as follows:
#1 Prepare the new device as node 1 by enabling cluster mode and rebooting.
#2 Prepare the cluster configuration on node 1 (apply-groups, reth interfaces, chassis cluster settings); the remaining routing/policy configuration can be copied over as-is.
#3 Disable the revenue interfaces on node 1 and bring it live in the network.
#4 Cut over the traffic by disabling the interfaces on the production standalone and enabling the interfaces on the node 1 cluster.
#5 Once traffic has migrated to node 1, disconnect the standalone from the network (leaving management and console connected).
#6 Wipe the standalone's configuration, enable cluster mode as node 0, and reboot.
#7 Halt the box, then connect the control and fabric links, leaving the revenue cables disconnected.
#8 Let the device boot up, join the cluster, and sync its configuration from the node 1 primary.
#9 Once the cluster stabilizes, connect the revenue cables left disconnected in step 7.
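For reference, the key commands behind the steps above could be sketched as follows (the cluster ID, fabric member ports, and priorities are assumptions; adjust them to your environment):

```
# Steps 1 and 6: enable cluster mode (operational mode; triggers a reboot)
set chassis cluster cluster-id 1 node 1 reboot   # on the new device
set chassis cluster cluster-id 1 node 0 reboot   # later, on the former standalone

# Step 2: fabric links and redundancy-group priorities (configuration mode)
set interfaces fab0 fabric-options member-interfaces ge-0/0/2
set interfaces fab1 fabric-options member-interfaces ge-5/0/2
set chassis cluster redundancy-group 0 node 0 priority 200
set chassis cluster redundancy-group 0 node 1 priority 100
set chassis cluster redundancy-group 1 node 0 priority 200
set chassis cluster redundancy-group 1 node 1 priority 100
```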
Note: Sample configuration is given in the link above for reference.
The procedure may seem lengthy, but it safeguards you with minimal downtime.
Hope this clarifies and helps.
Thank you for providing a solution. It's helpful.
So basically, there would be a failover between node 0 and node 1: first node 1 becomes primary when I bring it live in the network, and then node 0 after its reboot, right?
Would it be possible in reverse order? I mean, leave node 0 as it is, connect node 1 (assuming the cluster configuration is already prepared on it) to node 0, reboot, and once the cluster is established, disable the interfaces on node 0. That way node 1 becomes primary, and vice versa.
Thank you for your time, appreciated!
Please find answers inline:
[CP] :- So basically, there would be a failover between node 0 and node 1: first node 1 becomes primary when I bring it live in the network, and then node 0 after its reboot, right?
[Juniper] :- Yes your understanding here is correct.
[CP] :- Would it be possible in reverse order? I mean, leave node 0 as it is, connect node 1 (assuming the cluster configuration is already prepared on it) to node 0, reboot, and once the cluster is established, disable the interfaces on node 0. That way node 1 becomes primary, and vice versa.
[Juniper] :- No, it is not possible the other way around, unless you are fine with a traffic disruption. [The methodology I replied with is geared toward minimal downtime.]
Reason :- Node 0 is the presently running standalone, so it would require the cluster config and a reboot to become node 0.
Hence, the new member can first be brought into the cluster with the cluster config, etc., and be made ready to take over and serve traffic.
Once traffic is cut over from the standalone to the new node 1, the standalone can be worked on for the chassis cluster conversion.
Once the standalone is ready as the node 0 member, the steps defined in my last update can be followed to make it part of the cluster pair.
Note: For clarity I have called the new device node 1 and the existing device the future node 0. Which one becomes 0 or 1 is your choice.
I hope this clarifies the remaining doubts. Do revert for any further clarifications.
Thanks for the clarification, appreciated!
I followed the steps below and the clustering worked:
- Set the chassis cluster cluster-id and node ID, then rebooted (node 1).
- Once the firewall (node 1) had booted and was back up, replaced its config with the one from the node 0 (currently active) firewall. After this, shut down the node 1 firewall and took it to the site.
- At the site, mounted the firewall and left it powered off, then connected the HA cables: port 1 of node 0 to port 1 of node 1, and port 2 of node 0 to port 2 of node 1.
- Then powered on node 1.
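(For anyone following along, the cluster state at this point can be checked with the standard operational commands:)

```
show chassis cluster status
show chassis cluster interfaces
show chassis cluster statistics
```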
It performed the clustering without a further reboot; however, I haven't gone through the failover process you mentioned. Failover testing is important, but due to time constraints I have planned it for next time.
Your comments and previous post were really useful and cover the whole process, from clustering through to failover testing. It definitely helped me.
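When I do get to the failover testing, my understanding is that a manual failover for a redundancy group can be triggered and then reset with commands like these (the redundancy-group number here is just an example):

```
request chassis cluster failover redundancy-group 1 node 1
request chassis cluster failover reset redundancy-group 1
```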
Thank you for your time.