
Industrial SRX mk1 (Project Taco)

By Karel Hendrych posted 27 days ago


Let's expand on the article on vSRX on mini-PC with details on another platform and use case. This time, the Juniper vSRX is deployed on a specific fanless, rugged, DIN-mountable, and DC-powered PC for industrial applications, featuring plenty of Ethernet interfaces and 4G/5G connectivity, effectively making it an “Industrial SRX.”

vSRX Industrial

Introduction

It is challenging to satisfy all customers in industrial secure networking applications with vendor appliances, as their needs can vary significantly across multiple dimensions—environmental attributes, performance, interface types and counts, storage, and cost.

Therefore, the vSRX, with a lightweight software stack running on a suitable hardware platform, has been chosen for specific projects.

Motivating Factors for a Custom Industrial SRX

Here is a summarized list of motivating factors for using vSRX on an industrial PC:

  • Environmental hardening for demanding conditions, including temperature and humidity
  • Physical dimensions
  • DIN-mountable, DC-powered form factor
  • Custom platform resiliency with automatic vSRX recovery (non-bootable, PFE down)
  • Rich offering of Ethernet interfaces
  • Integrated 4G/5G for primary or backup connectivity
  • Performance of modern x86 CPUs and SSD storage for high performance and quick boot times
  • Low power consumption
  • Reduced time to market for new hardware appliances

Particular Industrial HW Setup Overview 

Tested hardware specification of ASRock Industrial iEP-5020G (5LAN-5G SKU):

  • 4-core Intel Atom x7433RE CPU
  • Support for in-band memory ECC
  • 1x 16GB SO-DIMM (Kingston 9905790-159.A01G)
  • 1x m.2 Crucial CT1000P3SSD8 NVMe SSD
  • 4x Intel I210 1GE interface
  • 1x Intel I226 1GE/2.5GE interface 
  • 1x m.2 Quectel RM520N-GL 4G/5G modem 

For details about hardware, environmental conditions, powering, and other specifications, please refer to the datasheet. For both hardware and software details, the output from the brilliant inxi tool (inxi -F -v7) is provided in Appendix 1.

Using the 4C CPU Resources

This serves as a preview for more detailed coverage in the next chapter:

  • The 1st core is assigned to the host OS, NIC interrupt handling, and Linux soft bridge processing
  • The 2nd to 4th cores are excluded from the Linux scheduler and interrupt handling
  • The 2nd and 3rd cores are assigned to the vSRX RE cores
  • The 4th core is assigned to the vSRX PFE
  • Depending on the use case, the vSRX layout can be adjusted to include 1 RE core and 2 PFE cores
  • The Turbo CPU clock is disabled shortly after boot: 
    • The nominal clock is 1.5 GHz (at 49°C/122°F in normal ambient conditions) when the vSRX PFE thread is running, with other threads idling
    • Otherwise, the sustainable turbo clock can reach a maximum of 3.4 GHz (at about 80°C/176°F) when managed by thermald
  • When needed, the CPU clock can be manually controlled within a range of 0.8 to 3.4 GHz
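Manual clock control can be sketched through the intel_pstate sysfs interface; the percentage values below are assumptions derived from the 0.8 to 3.4 GHz range described above, not settings taken from the deployment:

```
# Re-enable turbo (the startup script disables it via no_turbo)
echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo
# Cap the clock near the 1.5 GHz nominal frequency (~44% of 3.4 GHz)
echo 44 > /sys/devices/system/cpu/intel_pstate/max_perf_pct
# Restore the full range
echo 100 > /sys/devices/system/cpu/intel_pstate/max_perf_pct
```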

Networking Ports

Label on chassis  Linux Device  Bus info          Type
LAN1              enp1s0        PCI 0000:01:00.0  I226
LAN2              enp9s0        PCI 0000:09:00.0  I210
LAN3              enp8s0        PCI 0000:08:00.0  I210
LAN4              enp7s0        PCI 0000:07:00.0  I210
LAN5              enp6s0        PCI 0000:06:00.0  I210
ANT1-4            random*       USB 4:1           Quectel RM520N-GL 4G/5G

* The WWAN device appears under a random interface name across reboots. For practical usability, systemd renames the device, matched by its kernel driver, to 'wwangw'.

Sample layout:

  • The LAN1 port is dedicated to an external bridge hosting the vSRX WAN uplink interface
  • The LAN2 port is bound to a bridge that hosts the vSRX fxp0 interface and provides host OS platform management (IP address assigned to Linux bridge)
  • The LAN3 port is used for an internal bridge that passes 802.1Q VLAN tags to the vSRX, avoiding the need for ge-0/0/x interfaces on the vSRX for every network (which could lead to scaling issues)
  • The WWAN interface acts as a next-hop router/NAT, mapped to a bridge where the relevant vSRX interface uses the device as the default gateway for a separate routing instance
  • Generally, VirtIO NICs bound to regular Linux bridges are likely the best choice, as there are no capabilities or resources for any accelerated networking options with non-server-grade Intel NICs.

4G/5G WWAN Module Overview

The shipped Quectel module has proven effective in the mode of a next-hop router/NAT. The vSRX is leasing an IPv4 address using DHCP from the 192.168.225.0/24 range. The specific vSRX interface bridged to the Linux bridge hosting the 4G/5G module would typically be bound to a separate routing instance with its own default gateway. A practical use case would involve two IPsec tunnels—one via WAN and the other via 4G/5G WWAN—with BGP/BFD within the tunnels to fail over to WWAN in the event of a WAN outage.
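A minimal Junos sketch of this binding, assuming the ge-0/0/2 interface used later for br-wwan and a routing-instance name (WWAN-VR) of my choosing:

```
set interfaces ge-0/0/2 unit 0 family inet dhcp
set routing-instances WWAN-VR instance-type virtual-router
set routing-instances WWAN-VR interface ge-0/0/2.0
```

The DHCP lease from the module's 192.168.225.0/24 pool then supplies the instance's own default gateway.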

Switching Capabilities Add-on

The vSRX implementation is limited by two main factors:

  1. The hardware platform it is running on.
  2. The underlying hypervisor.

These limitations dictate the available use cases, specifically the lack of Layer 2 features. To address these limitations, the EX4100-H-12 switch has been added, as it provides the missing features that the standalone box lacks. The most important features include:

  • PoE++ (with up to 90W per port)
  • L2 security
  • Switching and L2 protocols (STP/LACP)
  • IoT sensors

With the EX4100-H-12 and the vSRX, we can create the well-known setup known as "router on a stick," where the vSRX functions as a router-firewall and the EX4100-H-12 acts as an L2 or L3 switch. This combination offers full L2 to L7 capabilities in a ruggedized, DIN-mountable form factor.
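The vSRX end of the router-on-a-stick trunk can be sketched as below; the VLAN IDs and addresses are illustrative, only the ge-0/0/0 trunk interface follows the layout described in this article:

```
set interfaces ge-0/0/0 vlan-tagging
set interfaces ge-0/0/0 unit 10 vlan-id 10 family inet address 10.0.10.1/24
set interfaces ge-0/0/0 unit 20 vlan-id 20 family inet address 10.0.20.1/24
```

A matching trunk port on the EX4100-H-12 carries the same VLANs toward the industrial PC.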

Another advantage of using the EX4100-H-12 is the supplied power supply unit (PSU), which, at 360W, can provide enough power for both components of this solution. The PSU is also DIN-mountable and comes in both AC and DC variants.

This article does not aim to cover all the configurations of the EX series switch. All details can be found on the Juniper website dedicated to the EX4100 platforms.

Linux/KVM Host Settings Details

Storage Filesystem

Although the industrial PC has only one physical SSD drive, resiliency can be increased by storing two copies of data. Since the use case is not I/O intensive but rather focused on reliability, the following steps can be taken when the system is installed on LVM (recommended) and there is available space in the LVM Volume Group:

lvcreate vsrx-h-vg --size 200g --name storage
mkfs.btrfs -d dup -m dup -L storage /dev/vsrx-h-vg/storage

The first command creates a 200 GiB LVM Logical Volume named storage on the Volume Group vsrx-h-vg. The subsequent mkfs.btrfs command, using the dup profile of the BTRFS filesystem and the filesystem label storage, ensures that two copies of both data and metadata are stored on the same physical device. In the event of I/O errors during reads (CRC mismatches), the intact copy is used and the data with the wrong checksum is rewritten. Naturally, storage utilization is double that of a setup with no duplication; the device size is 200 GiB, yet the df command correctly reports 100 GiB:

# btrfs dev usage /mnt/storage/
/dev/mapper/vsrx--h--vg-storage, ID: 1
   Device size:           200.00GiB
   Device slack:              0.00B
   Data,DUP:                6.00GiB
   Metadata,DUP:            2.00GiB
   System,DUP:             64.00MiB
   Unallocated:           191.94GiB
# df -h /mnt/storage/
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/vsrx--h--vg-storage  100G  1.7G   98G   2% /mnt/storage
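The dup profile's self-healing only triggers on reads, so periodically scrubbing the filesystem is a sensible complement; a sketch assuming the mount point above:

```
# Verify checksums of all data/metadata and repair from the duplicate copy
btrfs scrub start -B /mnt/storage
# Inspect the results of the last scrub
btrfs scrub status /mnt/storage
```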

Startup Script 

#!/bin/bash
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# minimize swap use on SSD
echo 1 > /proc/sys/vm/swappiness
# disable turbo after 3 minutes
( sleep 180; echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo ) &
# wwan setup
# flip mode if the cdc_ether device doesn't exist; interface renamed by:
# /etc/systemd/network/10-wwangw.link  
# use default APN and try to extract APN from saved rescue config
APN=internet
CONFIG=/root/scripts/vsrx_recovery_rescue_config.txt
if [ -f $CONFIG ]; then
  if grep -q wwan-apn $CONFIG; then 
    APN=$( grep wwan-apn $CONFIG | awk '{ print $2 }' | cut -f 1 -d";" )
  fi  
fi
echo -e "AT+CGDCONT=1,\"ip\",\"$APN\"\r\n" > /dev/ttyUSB2
sleep 10
while ! ip link sh wwangw &> /dev/null; do
  echo -e 'AT\r\n' > /dev/ttyUSB2 
  sleep 3
  echo -e 'AT+QCFG="usbnet",1\r\n' > /dev/ttyUSB2 
  sleep 3
  echo -e "AT+CGDCONT=1,\"ip\",\"$APN\"\r\n" > /dev/ttyUSB2
  sleep 3
  echo -e 'AT+CFUN=1,1\r\n' > /dev/ttyUSB2
  sleep 40
done
ip link set wwangw up
ip link set wwangw master br-wwan

An important part of the platform setup is the systemd-udevd rule that maintains a consistent name for the Wireless 4G/5G network device by matching the driver name. Below are the contents of the sample /etc/systemd/network/10-wwangw.link:

[Match]
Driver=cdc_ether
[Link]
Name=wwangw
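Whether the rule matches can be dry-checked with the udev builtin against the device's sysfs path (wwan0 here is a hypothetical pre-rename interface name):

```
udevadm test-builtin net_setup_link /sys/class/net/wwan0
```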

Breakdown of the startup script:

  • Adjusts the host OS swap behavior to reduce SSD wear
  • Disables the turbo clock 3 minutes after boot (leveraging full CPU potential during vSRX boot)
  • Extracts the mobile APN name from the vSRX recovery configuration. Junos syntax
set apply-macro wwan wwan-apn internet
  • Establishes 4G/5G connectivity: if the wwangw interface is not found, the script attempts to switch the WWAN card into usbnet mode (effectively turning the wireless card into a router/NAT with an internal IP and an ISP-allocated IP address) and retries the connection
  • Finally, attaches the wwangw interface to the br-wwan Linux bridge
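The APN extraction pipeline can be exercised on its own; the snippet below fabricates a hypothetical rescue-config fragment in the curly-brace form that the apply-macro above produces:

```shell
# Fabricated rescue-config fragment (hypothetical file path and APN)
cat > /tmp/rescue_demo.txt <<'EOF'
apply-macro wwan {
    wwan-apn internet.example;
}
EOF
# The same grep/awk/cut pipeline used in startup.sh
APN=$(grep wwan-apn /tmp/rescue_demo.txt | awk '{ print $2 }' | cut -f 1 -d ";")
echo "$APN"   # internet.example
```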

Startup Script at Boot 

As mentioned in the original article, an easy way to manage the script is to place it (after chmod 755 [file]) in a directory such as /root/scripts and trigger it with an @reboot crontab event. This approach keeps everything in one place while the setup is being tuned and helps with navigation. The sample crontab below also includes the periodically initiated vSRX recovery script covered in the vSRX Recovery Scripts chapter.

@reboot /root/scripts/startup.sh
*/10 * * * * cd /root/scripts && ./vsrx_recovery_periodic.sh

When performing non-systematic tasks, a message in /etc/motd should serve as a reminder during every login.

Linux Bridge Networking Layout 

The schematics below provide a more detailed description of the networking layout discussed earlier, with the proposed vSRX zones highlighted in green.

  • The enp1s0 host interface connects to the br-wan bridge without VLAN tagging
  • The wwangw interface, representing the 4G/5G uplink, is bound to the br-wwan bridge
  • The enp9s0 interface connects to the br-mgmt bridge for host OS management access and vSRX fxp0 out-of-band management interface access
  • The enp8s0 interface connects to the br-trunk, passing VLAN-tagged traffic to the corresponding units on the vSRX side, including the trust and vSRX PFE IP gateway on the management segment (the default gateway for the host OS and fxp0)

Figure 1 - Sample layout of host OS and vSRX networking bindings

On Debian Linux, networking configuration is managed in the /etc/network/interfaces file. Below is a sample setup of bridges for trust, wan/wwan, and management (the complete sample is available in Appendix 2):

br-wan 

An external Linux bridge connects the vSRX WAN interface ge-0/0/1 with the host physical interface enp1s0:

auto enp1s0 br-wan
iface br-wan inet manual
        bridge_ports enp1s0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        bridge_hw enp1s0
        post-up echo 1 > /sys/class/net/br-wan/bridge/vlan_filtering
        post-up bridge vlan del dev br-wan vid 1 self
  • The above creates the br-wan bridge for the vSRX WAN interface
  • It bridges the host enp1s0 physical NIC
  • STP is disabled, with no delay before bringing it up
  • It isolates the host from IP processing

br-wwan

An external Linux bridge connects the vSRX ge-0/0/2 interface with the host interface wwangw, which represents the 4G/5G card.

auto br-wwan
iface br-wwan inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        post-up echo 1 > /sys/class/net/br-wwan/bridge/vlan_filtering
        post-up bridge vlan del dev br-wwan vid 1 self
  • The above creates the br-wwan bridge for the vSRX interface ge-0/0/2.0
  • The wwangw interface representing the 4G/5G module is bridged by the startup script.
  • STP is disabled, with no delay before bringing it up
  • It isolates the host from IP processing

br-mgmt

The br-mgmt Linux bridge hosts the fxp0 interface and provides IP management access for the host OS via the enp9s0 interface

auto enp9s0 br-mgmt
iface br-mgmt inet static
        address 192.168.100.120/24
        gateway 192.168.100.1
        dns-nameservers 10.0.0.10
        bridge_ports enp9s0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        bridge_hw enp9s0
        up ip rou add 192.168.6.0/24 via 192.168.100.1
  • Creates the br-mgmt bridge for hosting the fxp0 interface
  • Assigns the host IP address, default gateway (vSRX), and name servers.
  • Disables STP, with no delay before bringing it up
  • VLAN filtering is not applicable here

br-trunk 

An internal Linux bridge is used for passing VLAN-tagged traffic from the switch via enp8s0 to the vSRX ge-0/0/0 interface (with no explicit filtering, as the switch handles it).

auto enp8s0 br-trunk
iface br-trunk inet manual
        bridge_ports enp8s0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        bridge_hw enp8s0
        post-up bridge vlan del dev br-trunk vid 1 self
  • Creates the br-trunk bridge for the vSRX interface using tagging from within the VM
  • Bridges the host enp8s0 physical NIC
  • Disables STP, with no delay before bringing it up
  • Isolates the host from potential IP processing. No explicit VLAN filtering is applied, which is acceptable if the adjacent switch is under control
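With all four bridges defined, the resulting topology can be inspected with read-only iproute2 commands:

```
bridge link show           # which port is enslaved to which bridge
bridge vlan show           # per-port VLAN filtering state
ip -d link show br-wan     # details, including the vlan_filtering flag
```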

Revealing CPU Layout Details

Based on lscpu -e command output below: 

CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE    MAXMHZ   MINMHZ       MHZ
  0    0      0    0 0:0:0:0          yes 3400.0000 800.0000 1500.0110
  1    0      0    1 1:1:0:0          yes 3400.0000 800.0000 1499.9640
  2    0      0    2 2:2:0:0          yes 3400.0000 800.0000 1499.9170
  3    0      0    3 3:3:0:0          yes 3400.0000 800.0000 1500.0120

The proposed split of the CPU resources (no hyper-threading support on this CPU) in detail:


Figure 2 - Sample split of CPU resources between tasks

Tuning Host OS 

To achieve the above resource split, the vSRX RE and PFE CPUs need to be isolated from scheduling by tuning kernel parameters in /etc/default/grub (or an equivalent file), followed by running update-grub to reflect the changes for the next boot:

GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=1-3 rcu_nocbs=1-3 irqaffinity=0 mitigations=off default_hugepagesz=1GB hugepagesz=1G hugepages=8 transparent_hugepage=never console=tty0 console=ttyS0,9600n8 quiet ipv6.disable=1"

Breakdown of kernel parameters:

  • isolcpus - removes CPUs 1 to 3 from OS balancing and scheduling
  • rcu_nocbs - removes CPUs 1 to 3 from RCU callback processing
  • irqaffinity - sets the default IRQ affinity to CPU 0 (can also be adjusted at runtime, in /proc/irq/… )
  • mitigations - disables all the performance-impacting in-kernel mitigations for CPU vulnerabilities, 
    • not particularly relevant for a single-tenant network appliance
    • side-channel attacks are possible only if someone has already compromised the host/vSRX and can run an application 
    • for those concerned, keep the mitigations enabled by removing the parameter and keeping the Linux kernel and Intel/AMD CPU microcode packages up to date
  • hugepages - prevents memory fragmentation and increases performance for DPDK applications; 8x 1GB hugepages on a host with 16GB RAM
  • Analyze the output below on the given hardware to identify the devices causing the most interrupts on the CPU cores where the vSRX is pinned, before and after reboot (the complete picture may appear over time; NICs are a good start). After the adjustments, interrupts are handled by CPU0 only:
# cat /proc/interrupts 
            CPU0       CPU1       CPU2       CPU3
 139:          0          0          0          0  IR-PCI-MSI 3145728-edge      enp6s0
 140:      60893          0          0          0  IR-PCI-MSI 3145729-edge      enp6s0-TxRx-0
 141:      60893          0          0          0  IR-PCI-MSI 3145730-edge      enp6s0-TxRx-1
 142:      60893          0          0          0  IR-PCI-MSI 3145731-edge      enp6s0-TxRx-2
 143:      60893          0          0          0  IR-PCI-MSI 3145732-edge      enp6s0-TxRx-3
 144:          0          0          0          0  IR-PCI-MSI 3670016-edge      enp7s0
 145:      60893          0          0          0  IR-PCI-MSI 3670017-edge      enp7s0-TxRx-0
 146:      60893          0          0          0  IR-PCI-MSI 3670018-edge      enp7s0-TxRx-1
 147:      60893          0          0          0  IR-PCI-MSI 3670019-edge      enp7s0-TxRx-2
 148:      60893          0          0          0  IR-PCI-MSI 3670020-edge      enp7s0-TxRx-3
<SNIP>
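Whether the isolation and hugepage parameters took effect can be confirmed after reboot with read-only checks (expected values follow the GRUB line above):

```
cat /proc/cmdline                         # parameters passed to the kernel
cat /sys/devices/system/cpu/isolated      # expected: 1-3
grep HugePages_Total /proc/meminfo        # expected: 8
```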

Classic ext4 filesystem settings (where the host OS is installed) are specified in /etc/fstab, similar to the settings for the BTRFS filesystem located at /mnt/storage for VM storage.

/dev/mapper/vg01-root / ext4 errors=remount-ro,noatime,nodiratime,discard 0 1
/dev/mapper/vsrx--h--vg-storage /mnt/storage      btrfs   discard,defaults,noatime,nodiratime        0       2
  • discard informs the SSD which blocks can be trimmed internally
  • noatime and nodiratime relieve the SSD of some additional writes

vSRX Recovery scripts

The purpose of the recovery scripts is to enhance the resiliency of the vSRX by enabling the ability to recover the VM from a pristine VM disk image in the event of various defects, as long as the host OS remains operational. The following scenarios are covered:

  • Inability of the host OS to connect to fxp0 (5 consecutive attempts with a 40-second delay).
  • PFE not coming online (5 consecutive checks with a 30-second delay).
  • Can also be used for upgrades by starting over with a newer version and importing the configuration. The license can be included in the configuration file.

The recovery script, as shown in the crontab listing in the Startup Script section, is executed every 10 minutes; no recovery process should take longer than that, which prevents concurrent runs.

*/10 * * * * cd /root/scripts && ./vsrx_recovery_periodic.sh

The /root/scripts folder contains not only startup.sh but also the following contents:

-rwxr-xr-x 1 root root 1219 May 18 01:39 startup.sh
-rw-r--r-- 1 root root   35 May 17 11:45 vsrx_recovery_default-image.txt
-rwxr-xr-x 1 root root 1938 May 18 11:45 vsrx_recovery_periodic.sh
-rwxr-xr-x 1 root root 1998 May 18 12:01 vsrx_recovery_redeploy.sh
-rw------- 1 root root 8063 Feb  5 15:47 vsrx_recovery_rescue_config.txt_default
-rw------- 1 root root 8063 May 18 12:33 vsrx_recovery_rescue_config.txt
  • vsrx_recovery_default-image.txt - contains the name of the vSRX3 QCOW2 image in the /mnt/storage/install folder which gets re-deployed during the recovery process
  • vsrx_recovery_periodic.sh - the script periodically executed by crond
  • vsrx_recovery_redeploy.sh - the re-deploy script itself
  • vsrx_recovery_rescue_config.txt - configuration extracted regularly from the vSRX rescue configuration; if no rescue configuration is present on the vSRX, this file is removed
  • vsrx_recovery_rescue_config.txt_default - default configuration used when there is no saved vsrx_recovery_rescue_config.txt

The scripts assume that a record for vsrx (fxp0 IP address) exists in the host OS's /etc/hosts file.
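A minimal sketch of such a record (the fxp0 address is an assumption consistent with the 192.168.100.0/24 management subnet used earlier):

```
192.168.100.121   vsrx
```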

vsrx_recovery_periodic.sh

#!/bin/bash
# khendrych@juniper.net
#set -x
connect_retry=0
pfe_online_retry=0
tmp=$( mktemp )
retries=5
while true; do
  if ssh -o ConnectTimeout=20 vsrx cli "show system configuration rescue" > $tmp 2>/dev/null; then
    if grep -q "No rescue configuration is set" $tmp; then  
      logger "$0: No rescue configuration is set"           
      if [ -f vsrx_recovery_rescue_config.txt ]; then
        logger "$0: Deleting local non-default rescue config"       
        rm vsrx_recovery_rescue_config.txt
      fi
    elif [ ! -f ./vsrx_recovery_rescue_config.txt ]; then
      cp $tmp ./vsrx_recovery_rescue_config.txt   
      logger "$0: Saving initial rescue configuration from vSRX"            
    else
      if ! diff ./vsrx_recovery_rescue_config.txt $tmp; then
        cp $tmp ./vsrx_recovery_rescue_config.txt
        logger "$0: Saving rescue configuration - diff between saved and vSRX"      
      else
        logger "$0: No diff between saved rescue config and vSRX"           
      fi              
    fi  
    if ! ssh vsrx cli "show chassis fpc pic-status" | grep PIC | grep -q Online; then
      logger "$0: vSRX PFE down - starting monitoring loop, potential redeploy"
      while true; do
        if ! ssh vsrx cli "show chassis fpc pic-status" | grep PIC | grep -q Online; then
          logger "$0: vSRX PFE offline"
          ((pfe_online_retry++))
          if [ $pfe_online_retry -eq $retries ]; then
            logger "$0: Running vsrx_recovery_redeploy.sh - vSRX PFE offline"
             ./vsrx_recovery_redeploy.sh
            break 
          fi
        else
          logger "$0: vSRX PFE came online"
          break
        fi         
        sleep 30
      done
    fi      
    break
  else
    logger "$0: Can't establish SSH connection to vSRX"
  fi
  ((connect_retry++))
  if [ $connect_retry -eq $retries ]; then
    logger "$0: Running vsrx_recovery_redeploy.sh - no SSH connection"
    ./vsrx_recovery_redeploy.sh
    break
  fi
  sleep 40
done
test -e $tmp && rm $tmp

Breakdown of vsrx_recovery_periodic.sh:

  • Executes in a loop, making 5 attempts to connect to the vSRX fxp0 interface to collect the rescue configuration.
  • If successful, collects the recovery configuration.
  • If the recovery configuration is not present, deletes the non-default recovery configuration on the host, if it exists.
  • If the recovery configuration is present, the script either saves it directly if no host-side configuration exists, or saves it only if there is a difference between the recovery configuration on the vSRX and the one on the host.
  • In the next step, if the connection was successful, the PFE online status is checked in a monitoring loop. If it is not successful, a re-deploy is initiated.
  • Finally, if attempts to connect to fxp0 fail, the re-deploy is also initiated.

vsrx_recovery_redeploy.sh

The redeploy script leverages a vSRX feature that allows a specially crafted CD ISO image to be used for loading the default configuration.

#!/bin/bash
# khendrych@juniper.net
# set -x
# args - [junos qcow]
VM_PATH=/mnt/storage/kvm
INSTALL_PATH=/mnt/storage/install
JUNOS_DEF=$( cat vsrx_recovery_default-image.txt )
JUNOS_ARG=$1
ISO_FOLDER=vsrx_recovery_redeploy_iso
if [ -z $JUNOS_ARG ]; then
  JUNOS=$JUNOS_DEF
else  
  if [ -f $INSTALL_PATH/$JUNOS_ARG ]; then
    JUNOS=$JUNOS_ARG
  else
    echo "Junos image override - file does not exist"     
    logger "$0: Junos image override - file does not exist"
    exit 1 
  fi   
fi  
logger "$0: Enabling CPU turbo to speed up operations"
echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo
logger "$0: Destroying vSRX"
virsh -q destroy vsrx &>/dev/null
test -e $VM_PATH/vsrx.qcow2 && rm $VM_PATH/vsrx.qcow2 
test -e $VM_PATH/vsrx-init-config.iso && rm $VM_PATH/vsrx-init-config.iso 
test -e $ISO_FOLDER || mkdir $ISO_FOLDER
if [ -f vsrx_recovery_rescue_config.txt  ]; then
  logger "$0: Using saved rescue config"
  cp vsrx_recovery_rescue_config.txt $ISO_FOLDER/juniper.conf
else
  logger "$0: Using default rescue config"
  cp vsrx_recovery_rescue_config.txt_default $ISO_FOLDER/juniper.conf
fi  
mkisofs -quiet -l -o $VM_PATH/vsrx-init-config.iso $ISO_FOLDER/
cp $INSTALL_PATH/$JUNOS $VM_PATH/vsrx.qcow2
test -e $ISO_FOLDER && rm -rf $ISO_FOLDER
logger "$0: Starting vSRX"
virsh -q start vsrx
logger "$0: Waiting for vSRX to come up for 4 minutes "
sleep 240
logger "$0: SSH keys operation"
ssh-keygen -f ~/.ssh/known_hosts -R "vsrx" &> /dev/null
ssh-keyscan -t ssh-ed25519 vsrx 2>/dev/null >> /root/.ssh/known_hosts
sed -i '/root@vsrx/d' ~/.ssh/authorized_keys
ssh vsrx "rm /root/.ssh/id_ed25519*" &>/dev/null
sleep 1
ssh vsrx 'ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ""' &>/dev/null
sleep 1
ssh vsrx cat /root/.ssh/id_ed25519.pub >> /root/.ssh/authorized_keys
sleep 1
logger "$0: Additional vSRX reboot"
ssh vsrx cli "request system reboot" &>/dev/null
( sleep 180; echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo; logger "$0: Disabling CPU turbo after recovery operation" ) &

Breakdown of vsrx_recovery_redeploy.sh:

  • If the vSRX Junos QCOW2 image is not provided as the first argument, use the default one specified in vsrx_recovery_default-image.txt (default for automated processes; explicit for manual execution to start over with specific Junos versions).
  • If the saved rescue configuration is not available, use the default configuration.
  • Build the ISO image with the configuration and start the vSRX.
  • Remove the host-side SSH key fingerprint from the known hosts.
  • Generate an SSH key pair on the vSRX for the purpose of SSH access from the vSRX to the host OS.
  • Execute an additional reboot, as the second RE core is not allocated during the first boot.
  • Finally, disable turbo clocking of the CPU, which is enabled at the beginning of the redeploy process.

A sample run recorded in the logs shows the occasion when fxp0 was not reachable for SSH. The logs at 12:50 are from a new periodic execution, notifying about the non-saved rescue configuration:

May 18 12:40:01 vsrx-h CRON[2600]: (root) CMD (cd /root/scripts && ./vsrx_recovery_periodic.sh)
May 18 12:40:20 vsrx-h root[2605]: ./vsrx_recovery_periodic.sh: Can't establish SSH connection to vSRX
May 18 12:41:03 vsrx-h root[2608]: ./vsrx_recovery_periodic.sh: Can't establish SSH connection to vSRX
May 18 12:41:46 vsrx-h root[2611]: ./vsrx_recovery_periodic.sh: Can't establish SSH connection to vSRX
May 18 12:42:29 vsrx-h root[2614]: ./vsrx_recovery_periodic.sh: Can't establish SSH connection to vSRX
May 18 12:43:12 vsrx-h root[2617]: ./vsrx_recovery_periodic.sh: Can't establish SSH connection to vSRX
May 18 12:43:12 vsrx-h root[2618]: ./vsrx_recovery_periodic.sh: Running vsrx_recovery_redeploy.sh - no SSH connection
May 18 12:43:12 vsrx-h root[2621]: ./vsrx_recovery_redeploy.sh: Enabling CPU turbo to speed up operations
May 18 12:43:12 vsrx-h root[2622]: ./vsrx_recovery_redeploy.sh: Destroying vSRX
May 18 12:43:13 vsrx-h root[2651]: ./vsrx_recovery_redeploy.sh: Using default rescue config
May 18 12:43:13 vsrx-h root[2656]: ./vsrx_recovery_redeploy.sh: Starting vSRX
May 18 12:43:15 vsrx-h root[2760]: ./vsrx_recovery_redeploy.sh: Waiting for vSRX to come up for 4 minutes
May 18 12:47:15 vsrx-h root[2777]: ./vsrx_recovery_redeploy.sh: SSH keys operation
May 18 12:47:19 vsrx-h root[2789]: ./vsrx_recovery_redeploy.sh: Additional vSRX reboot
May 18 12:50:01 vsrx-h CRON[2801]: (root) CMD (cd /root/scripts && ./vsrx_recovery_periodic.sh)
May 18 12:50:01 vsrx-h root[2806]: ./vsrx_recovery_periodic.sh: No rescue configuration is set
May 18 12:50:19 vsrx-h root[2791]: ./vsrx_recovery_redeploy.sh: Disabling CPU turbo after recovery operation

Next are the logs from when the routing engine is reachable via fxp0 and the rescue configuration is saved, but for some reason, the PFE is not online. As in the previous case, the 12:30 run is the first after recovery, notifying about the non-existing rescue configuration.

May 18 12:20:01 vsrx-h CRON[2294]: (root) CMD (cd /root/scripts && ./vsrx_recovery_periodic.sh)
May 18 12:20:02 vsrx-h root[2303]: ./vsrx_recovery_periodic.sh: No diff between saved rescue config and vSRX
May 18 12:20:02 vsrx-h root[2307]: ./vsrx_recovery_periodic.sh: vSRX PFE down - starting monitoring loop, potential redeploy
May 18 12:20:02 vsrx-h root[2311]: ./vsrx_recovery_periodic.sh: vSRX PFE offline
May 18 12:20:33 vsrx-h root[2318]: ./vsrx_recovery_periodic.sh: vSRX PFE offline
May 18 12:21:03 vsrx-h root[2326]: ./vsrx_recovery_periodic.sh: vSRX PFE offline
May 18 12:21:33 vsrx-h root[2333]: ./vsrx_recovery_periodic.sh: vSRX PFE offline
May 18 12:22:04 vsrx-h root[2339]: ./vsrx_recovery_periodic.sh: vSRX PFE offline
May 18 12:22:04 vsrx-h root[2340]: ./vsrx_recovery_periodic.sh: Running vsrx_recovery_redeploy.sh - vSRX PFE offline
May 18 12:22:04 vsrx-h root[2343]: ./vsrx_recovery_redeploy.sh: Enabling CPU turbo to speed up operations
May 18 12:22:04 vsrx-h root[2344]: ./vsrx_recovery_redeploy.sh: Destroying vSRX
May 18 12:22:04 vsrx-h root[2373]: ./vsrx_recovery_redeploy.sh: Using saved rescue config
May 18 12:22:04 vsrx-h root[2378]: ./vsrx_recovery_redeploy.sh: Starting vSRX
May 18 12:22:07 vsrx-h root[2481]: ./vsrx_recovery_redeploy.sh: Waiting for vSRX to come up for 4 minutes
May 18 12:26:07 vsrx-h root[2507]: ./vsrx_recovery_redeploy.sh: SSH keys operation
May 18 12:26:10 vsrx-h root[2519]: ./vsrx_recovery_redeploy.sh: Additional vSRX reboot
May 18 12:29:11 vsrx-h root[2521]: ./vsrx_recovery_redeploy.sh: Disabling CPU turbo after recovery operation
May 18 12:30:01 vsrx-h CRON[2542]: (root) CMD (cd /root/scripts && ./vsrx_recovery_periodic.sh)
May 18 12:30:01 vsrx-h root[2547]: ./vsrx_recovery_periodic.sh: No rescue configuration is set
May 18 12:30:01 vsrx-h root[2548]: ./vsrx_recovery_periodic.sh: Deleting local non-default rescue config

Host OS Firewall 

It is good practice to engage Linux packet filtering to protect the host OS management interface. The Linux firewall can be used for both stateless and stateful filtering on a bridge if certain traffic patterns need to be dropped before reaching vSRX. Below are sample nftables settings for host OS management protection, stored in /etc/nftables.conf:

#!/usr/sbin/nft -f
flush ruleset
table inet filter  {
        set MGMT {
                #permits SSH 
                type ipv4_addr
                counter
                flags interval
                elements = {
                        192.168.0.0/16,
                        10.0.0.0/8,
                }
        }         
        set NTPC {
                #permits NTP clients
                type ipv4_addr
                counter
                flags interval
                elements = {
                        192.168.100.215,
                }
        }         
        chain INPUT {
                type filter hook input priority filter; policy accept;
                ct state invalid counter drop
                ip protocol icmp counter limit rate 10/second accept
                ip protocol icmp counter drop
                ct state related,established counter accept
                ip saddr @MGMT ct state new tcp dport 22 counter log flags all prefix "MGMT " accept
                ip saddr @NTPC udp dport 123 counter accept
                iifname "lo" counter accept
                counter limit rate 10/second reject with icmpx type admin-prohibited
                counter drop
        }
        chain FORWARD {
                type filter hook forward priority filter; policy drop;
                counter drop
        }
        chain OUTPUT {
                type filter hook output priority filter; policy accept;
                counter accept
        }
}

Breakdown of the above nftables settings:

  • ICMPv4 is throttled to 10 packets/second 
  • Established connections are permitted
  • SSH can be established only from the specific IPv4 addresses stored in sets; these can be expanded at runtime using the nft add/delete element syntax
  • NTP is permitted from a specific endpoint
  • All outbound traffic is permitted (suboptimal)

Validation, application, and listing of the changes made in /etc/nftables.conf:

nft -c -f /etc/nftables.conf
systemctl reload nftables 
nft list ruleset

vSRX VM

vSRX VM Settings

  • A sample QEMU/KVM VM configuration for vSRX3 is provided in the text box below.
  • It can be copied (CTRL+A, CTRL+C) into your favorite text editor and then placed into the VM configuration file at /etc/libvirt/qemu/vsrx.xml.
  • The important part is the <cputune> section, which pins the three vCPUs to host CPUs, with the 1st vCPU assigned to host CPU 1 (numbering starts at 0).
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='3'/>
  </cputune>
  • The VM memory size must fit within (≤) the configured hugepages allocation.
  • The vHDD image is placed in /mnt/storage/kvm/vsrx.qcow2.
  • The VM is configured with 3 vCPUs and 5 GiB of vRAM.
<domain type='kvm'>
  <name>vsrx</name>
  <memory unit='KiB'>5242880</memory>
  <currentMemory unit='KiB'>5242880</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='1048576' unit='KiB'/>
    </hugepages>
    <nosharepages/>
    <locked/>
    <allocation mode='immediate'/>
  </memoryBacking>
  <vcpu placement='static'>3</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='3'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-3.1'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-model' check='partial'/>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/mnt/storage/kvm/vsrx.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/storage/kvm/vsrx-init-config.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <source bridge='br-mgmt'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <source bridge='br-trunk'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <source bridge='br-wan'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <source bridge='br-wwan'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes'>
      <listen type='address'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </memballoon>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </rng>
  </devices>
</domain>

Define the VM from the XML, enable automatic start with the host, and start it with the serial console attached:

virsh define vsrx.xml
virsh autostart vsrx
virsh start vsrx --console
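To confirm the definition and autostart flag took effect, the standard virsh queries can be used, for example:

```shell
# List all defined VMs and their current state
virsh list --all

# Show VM details, including the "Autostart" flag
virsh dominfo vsrx
```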

Sidenote: once the vSRX VM has been defined, use virsh edit vsrx for further changes; edits made directly to the XML file won’t be reflected.

To properly shut down vSRX upon platform power-off or reboot, the /etc/default/libvirt-guests file needs to be adjusted as follows:

ON_BOOT=start
ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=120
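These settings only take effect if the libvirt-guests service itself is active; if it is not already enabled by the package, it can be enabled with:

```shell
systemctl enable --now libvirt-guests
```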

A sample vSRX Junos configuration is present in Appendix 3.

Basic Performance Benchmark

  • Simple L4 FW+NAT test using one nuttcp TCP stream at 100k PPS (1150B TCP MSS)
  • The 100k PPS rate is fixed so that the impact of CPU clock changes on CPU load can be observed
  • 2 test cases:
    • with and without CPU turbo clock enabled
    • observed are the reported vSRX PFE load and the load of the host CPU core responsible for network processing
  • Revealing the platform maximums would require professional test equipment
Turbo Clock  Ingress Mbps  Ingress kPPS  vSRX PFE Load (%)  Host OS Core 0 Load (%)  CPU Temp (°C)
No           960           100           16                 80                       52
Yes          960           100           9                  40                       84

Results of performance test in given setup
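As a sanity check on the numbers above, the ingress bitrate follows directly from the packet rate and wire frame size. A back-of-the-envelope sketch, assuming the 1150B MSS plus 40B of TCP/IPv4 headers and a 14B Ethernet header (preamble and FCS ignored):

```python
PPS = 100_000          # packets per second (fixed test rate)
MSS = 1150             # TCP payload bytes per packet
HEADERS = 40 + 14      # TCP/IPv4 (40B) + Ethernet (14B) overhead

frame_bytes = MSS + HEADERS            # 1204 bytes per frame on the wire
mbps = PPS * frame_bytes * 8 / 1e6     # bits per second -> Mbps

print(round(mbps))  # ~963, consistent with the ~960 Mbps observed
```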

Conclusion

  • Lots of headroom remains on the vSRX PFE core, e.g., for heavy L7 services
  • There is even more potential for the vSRX PFE after changing from one to two PFE cores
  • CPU turbo mode provides a significant performance uplift when increased packet rates are expected

Appendix 1 – Complete inxi Output

System:
  Host: vsrx-h Kernel: 6.1.0-35-amd64 arch: x86_64 bits: 64 compiler: gcc v: 12.2.0
    Console: pty pts/2 Distro: Debian GNU/Linux 12 (bookworm)
Machine:
  Type: Desktop Mobo: ASRock model: iEP-5020G serial: N/A UEFI: American Megatrends LLC. v: P1.10
    date: 10/08/2024
Battery:
  Message: No system battery data found. Is one present?
Memory:
  RAM: total: 14.93 GiB used: 8.89 GiB (59.5%)
  Array-1: capacity: 128 GiB note: check slots: 1 EC: None max-module-size: 128 GiB note: est.
  Device-1: Controller0-ChannelA-DIMM0 type: DDR5 detail: synchronous size: 16 GiB
    speed: 4800 MT/s volts: 1.1 width (bits): data: 64 total: 64 manufacturer: Kingston
    part-no: 9905790-159.A01G serial: 8402AD3A
CPU:
  Info: quad core model: Intel Atom x7433RE bits: 64 type: MCP smt: <unsupported> arch: Alder Lake
    rev: 0 cache: L1: 384 KiB L2: 2 MiB L3: 6 MiB
  Speed (MHz): avg: 1500 min/max: 800/3400 volts: 1.0 V ext-clock: 100 MHz cores: 1: 1500
    2: 1500 3: 1500 4: 1500 bogomips: 11980
  Flags: 3dnowprefetch abm acpi adx aes aperfmperf apic arat arch_capabilities arch_lbr
    arch_perfmon art avx avx2 avx_vnni bmi1 bmi2 bts cat_l2 cdp_l2 clflush clflushopt clwb cmov
    constant_tsc cpuid cpuid_fault cx16 cx8 de ds_cpl dtes64 dtherm dts epb ept ept_ad erms est
    f16c flexpriority flush_l1d fma fpu fsgsbase fsrm fxsr gfni ht hwp hwp_act_window hwp_epp
    hwp_notify hwp_pkg_req ibpb ibrs ibrs_enhanced ibt ida intel_pt invpcid invpcid_single
    lahf_lm lm mca mce md_clear mmx monitor movbe movdir64b movdiri msr mtrr nonstop_tsc nopl nx
    ospke pae pat pbe pcid pclmulqdq pdcm pdpe1gb pebs pge pku pln pni popcnt pse pse36 pts rdpid
    rdrand rdseed rdt_a rdtscp rep_good sdbg sep serialize sha_ni smap smep ss ssbd sse sse2
    sse4_1 sse4_2 ssse3 stibp syscall tm tm2 tpr_shadow tsc tsc_adjust tsc_deadline_timer
    tsc_known_freq umip vaes vme vmx vnmi vpclmulqdq vpid waitpkg x2apic xgetbv1 xsave xsavec
    xsaveopt xsaves xtopology xtpr
Graphics:
  Device-1: Intel Alder Lake-N [UHD Graphics] vendor: ASRock driver: i915 v: kernel arch: Gen-12.2
    ports: active: none empty: DP-1, DP-2, DP-3, HDMI-A-1 bus-ID: 00:02.0 chip-ID: 8086:46d0
    class-ID: 0300
  Display: server: No display server data found. Headless machine? tty: 183x63
  API: OpenGL Message: GL data unavailable in console for root.
Audio:
  Message: No device data found.
Network:
  Device-1: Intel Ethernet I226-IT vendor: ASRock driver: igc v: kernel pcie: speed: 5 GT/s
    lanes: 1 port: N/A bus-ID: 01:00.0 chip-ID: 8086:125d class-ID: 0200
  IF: enp1s0 state: down mac: 9c:6b:00:5b:af:90
  Device-2: Intel I210 Gigabit Network vendor: ASRock driver: igb v: kernel pcie:
    speed: 2.5 GT/s lanes: 1 port: 4000 bus-ID: 06:00.0 chip-ID: 8086:1533 class-ID: 0200
  IF: enp6s0 state: down mac: 9c:6b:00:01:43:ea
  Device-3: Intel I210 Gigabit Network vendor: ASRock driver: igb v: kernel pcie:
    speed: 2.5 GT/s lanes: 1 port: 3000 bus-ID: 07:00.0 chip-ID: 8086:1533 class-ID: 0200
  IF: enp7s0 state: down mac: 9c:6b:00:01:43:e9
  Device-4: Intel I210 Gigabit Network vendor: ASRock driver: igb v: kernel pcie:
    speed: 2.5 GT/s lanes: 1 port: 6000 bus-ID: 08:00.0 chip-ID: 8086:1533 class-ID: 0200
  IF: enp8s0 state: down mac: 9c:6b:00:5b:af:91
  Device-5: Intel I210 Gigabit Network vendor: ASRock driver: igb v: kernel pcie:
    speed: 2.5 GT/s lanes: 1 port: 5000 bus-ID: 09:00.0 chip-ID: 8086:1533 class-ID: 0200
  IF: enp9s0 state: up speed: 1000 Mbps duplex: full mac: 9c:6b:00:5b:af:92
  Device-6: Quectel Wireless Solutions RM520N-GL type: USB driver: cdc_ether,option,option1
    bus-ID: 4-1:2 chip-ID: 2c7c:0801 class-ID: 0a00 serial: 86485f3
  IF: wwangw state: unknown mac: 9e:9a:a3:e5:44:f8
  IF-ID-1: br-mgmt state: up speed: 1000 Mbps duplex: unknown mac: 9c:6b:00:5b:af:92
  Message: Output throttled. IPs: 1; Limit: 10; Override: --limit [1-x;-1 all]
  IF-ID-2: br-trunk state: up speed: 10 Mbps duplex: unknown mac: 9c:6b:00:5b:af:91
  IF-ID-3: br-wan state: up speed: 10 Mbps duplex: unknown mac: 9c:6b:00:5b:af:90
  IF-ID-4: br-wwan state: up speed: 10 Mbps duplex: unknown mac: 12:2c:87:11:9d:7c
  IF-ID-5: vnet0 state: unknown speed: 10 Mbps duplex: full mac: fe:54:00:3e:b9:47
  IF-ID-6: vnet1 state: unknown speed: 10 Mbps duplex: full mac: fe:54:00:8a:2b:f1
  IF-ID-7: vnet2 state: unknown speed: 10 Mbps duplex: full mac: fe:54:00:fa:79:b2
  IF-ID-8: vnet3 state: unknown speed: 10 Mbps duplex: full mac: fe:54:00:5b:81:2a
  WAN IP: 46.30.234.100
Bluetooth:
  Message: No bluetooth data found.
Logical:
  Device-1: VG: vsrx-h-vg type: LVM2 size: 930.53 GiB free: 702.53 GiB
  LV-1: platform type: linear size: 20 GiB
  Components: p-1: nvme0n1p3
  LV-2: storage type: linear size: 200 GiB
  Components: p-1: nvme0n1p3
  LV-3: swap type: linear size: 8 GiB
  Components: p-1: nvme0n1p3
RAID:
  Message: No RAID data found.
Drives:
  Local Storage: total: 931.51 GiB lvm-free: 702.53 GiB used: 5.27 GiB (0.6%)
  ID-1: /dev/nvme0n1 vendor: Crucial model: CT1000P3SSD8 size: 931.51 GiB speed: 31.6 Gb/s
    lanes: 4 type: SSD serial: 24404B705EAD rev: P9CR313 temp: 39.9 C scheme: GPT
  Message: No optical or floppy data found.
Partition:
  ID-1: / size: 19.52 GiB used: 3.48 GiB (17.8%) fs: ext4 dev: /dev/dm-0
    mapped: vsrx--h--vg-platform label: N/A uuid: N/A
  ID-2: /boot size: 447.1 MiB used: 50.3 MiB (11.2%) fs: ext4 dev: /dev/nvme0n1p2 label: boot
    uuid: 2acd557c-475d-48e2-aeaf-06eadfc92c93
  ID-3: /boot/efi size: 511 MiB used: 5.8 MiB (1.1%) fs: vfat dev: /dev/nvme0n1p1 label: N/A
    uuid: D2E9-B24B
  ID-4: /mnt/storage size: 100 GiB used: 1.73 GiB (1.7%) fs: btrfs dev: /dev/dm-2
    mapped: vsrx--h--vg-storage label: N/A uuid: N/A
Swap:
  ID-1: swap-1 type: partition size: 8 GiB used: 0 KiB (0.0%) priority: -2 dev: /dev/dm-1
    mapped: vsrx--h--vg-swap label: N/A uuid: 87716664-07fe-4ba4-8c9b-925bcab7a4c4
Unmounted:
  Message: No unmounted partitions found.
USB:
  Hub-1: 1-0:1 info: Hi-speed hub with single TT ports: 1 rev: 2.0 speed: 480 Mb/s
    chip-ID: 1d6b:0002 class-ID: 0900
  Hub-2: 2-0:1 info: Super-speed hub ports: 1 rev: 3.1 speed: 20 Gb/s chip-ID: 1d6b:0003
    class-ID: 0900
  Hub-3: 3-0:1 info: Hi-speed hub with single TT ports: 12 rev: 2.0 speed: 480 Mb/s
    chip-ID: 1d6b:0002 class-ID: 0900
  Hub-4: 4-0:1 info: Super-speed hub ports: 4 rev: 3.1 speed: 10 Gb/s chip-ID: 1d6b:0003
    class-ID: 0900
  Device-1: 4-1:2 info: Quectel Wireless Solutions RM520N-GL type: Ethernet Network,CDC-Data
    driver: cdc_ether,option,option1 interfaces: 6 rev: 3.2 speed: 5 Gb/s power: 896mA
    chip-ID: 2c7c:0801 class-ID: 0a00 serial: 86485f3
Sensors:
  System Temperatures: cpu: 45.0 C mobo: N/A
  Fan Speeds (RPM): N/A
Info:
  Processes: 134 Uptime: 8h 16m wakeups: 0 Init: systemd v: 252 target: graphical (5)
  default: graphical Compilers: N/A Packages: pm: dpkg pkgs: 786 Shell: Bash v: 5.2.15
  running-in: pty pts/2 (SSH) inxi: 3.3.26

Appendix 2 - sample /etc/network/interfaces 

source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
iface enp1s0 inet manual
iface enp6s0 inet manual
iface enp7s0 inet manual
iface enp8s0 inet manual
iface enp9s0 inet manual
auto enp1s0 br-wan
iface br-wan inet manual
        bridge_ports enp1s0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        bridge_hw enp1s0
        post-up echo 1 > /sys/class/net/br-wan/bridge/vlan_filtering
        post-up bridge vlan del dev br-wan vid 1 self
auto enp8s0 br-trunk
iface br-trunk inet manual
        bridge_ports enp8s0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        bridge_hw enp8s0
        post-up bridge vlan del dev br-trunk vid 1 self
auto enp9s0 br-mgmt
iface br-mgmt inet static
        address 192.168.100.120/24
        gateway 192.168.100.1
        dns-nameservers 10.0.0.10
        bridge_ports enp9s0
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        bridge_hw enp9s0
        up ip rou add 192.168.6.0/24 via 192.168.100.1
auto br-wwan
iface br-wwan inet manual
        bridge_ports none
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
        post-up echo 1 > /sys/class/net/br-wwan/bridge/vlan_filtering
        post-up bridge vlan del dev br-wwan vid 1 self

Appendix 3 – sample vSRX Junos configuration

set groups license apply-flags omit
set groups license system license keys key "DemolabJUNOS417281863 <SNIP>"
set groups license system license keys key "DemolabJUNOS512559343 <SNIP>"
set apply-groups license
set apply-macro wwan wwan-apn internet
set system host-name vsrx
set system root-authentication encrypted-password "<SNIP>"
set system root-authentication ssh-ed25519 "ssh-ed25519 <SNIP>" from 192.168.100.120
set system login announcement "\n Remember to save rescue config upon proven config changes:\n request system configuration rescue save\n\n"
set system services ssh root-login deny-password
set system services ssh sftp-server
set system services ssh ciphers aes256-ctr
set system services ssh ciphers "aes256-gcm@openssh.com"
set system services ssh macs "hmac-sha2-256-etm@openssh.com"
set system services ssh macs "hmac-sha2-512-etm@openssh.com"
set system services ssh key-exchange curve25519-sha256
set system services ssh client-alive-interval 120
set system services ssh hostkey-algorithm-list ed25519
set system services ssh rate-limit 10
set system time-zone Europe/Prague
set system static-host-mapping vmhost inet 192.168.100.120
set system syslog file messages any any
set system syslog file messages archive size 5m
set system syslog file messages archive files 4
set system archival configuration transfer-on-commit
set system archival configuration archive-sites "sftp://user:password@vmhost/upload/"
set system ntp server 192.168.100.120
set security log mode stream
set security log report
set security ssh-known-hosts host vmhost ed25519-key AAAAC3NzaC1lZDI1NTE5AAAAIKdMpsYNB3e4woFRd/CTlV5E20lsZTmsVm2u+872RVSK
set security alg h323 disable
set security alg mgcp disable
set security alg msrpc disable
set security alg sunrpc disable
set security alg rtsp disable
set security alg sccp disable
set security alg sip disable
set security alg talk disable
set security alg tftp disable
set security alg pptp disable
set security forwarding-options resource-manager cpu re 2
set security flow tcp-session strict-syn-check
set security nat source rule-set wan from zone mgmt
set security nat source rule-set wan from zone trust
set security nat source rule-set wan to zone wan
set security nat source rule-set wan rule wan match source-address 0.0.0.0/0
set security nat source rule-set wan rule wan match destination-address 0.0.0.0/0
set security nat source rule-set wan rule wan then source-nat interface
set security policies from-zone trust to-zone wan policy trust-wan-1 match source-address any
set security policies from-zone trust to-zone wan policy trust-wan-1 match destination-address any
set security policies from-zone trust to-zone wan policy trust-wan-1 match application any
set security policies from-zone trust to-zone wan policy trust-wan-1 then permit
set security policies from-zone mgmt to-zone wan policy mgmt-wan-1 match source-address any
set security policies from-zone mgmt to-zone wan policy mgmt-wan-1 match destination-address any
set security policies from-zone mgmt to-zone wan policy mgmt-wan-1 match application any
set security policies from-zone mgmt to-zone wan policy mgmt-wan-1 then permit
set security policies global policy drop-log match source-address any
set security policies global policy drop-log match destination-address any
set security policies global policy drop-log match application any
set security policies global policy drop-log then deny
set security policies global policy drop-log then log session-close
set security policies pre-id-default-policy then log session-close
set security zones security-zone wan interfaces ge-0/0/1.0 host-inbound-traffic system-services ping
set security zones security-zone mgmt tcp-rst
set security zones security-zone mgmt interfaces ge-0/0/0.100 host-inbound-traffic system-services ping
set security zones security-zone trust tcp-rst
set security zones security-zone trust interfaces ge-0/0/0.200 host-inbound-traffic system-services ping
set security zones security-zone wwan interfaces ge-0/0/2.0 host-inbound-traffic system-services ping
set security zones security-zone wwan interfaces ge-0/0/2.0 host-inbound-traffic system-services dhcp
set interfaces ge-0/0/0 description br-trunk-enp8s0-lan3
set interfaces ge-0/0/0 vlan-tagging
set interfaces ge-0/0/0 unit 100 description mgmt-pfe-gw
set interfaces ge-0/0/0 unit 100 vlan-id 100
set interfaces ge-0/0/0 unit 100 family inet address 192.168.100.216/24
set interfaces ge-0/0/0 unit 200 description trust
set interfaces ge-0/0/0 unit 200 vlan-id 200
set interfaces ge-0/0/0 unit 200 family inet address 10.10.200.1/24
set interfaces ge-0/0/1 description br-wan-enp1s0-lan1
set interfaces ge-0/0/1 unit 0 description wan
set interfaces ge-0/0/1 unit 0 family inet address 198.51.100.2/24
set interfaces ge-0/0/2 description br-wwan
set interfaces ge-0/0/2 unit 0 description "wwan 4/5g"
set interfaces ge-0/0/2 unit 0 family inet dhcp retransmission-attempt 50000
set interfaces ge-0/0/2 unit 0 family inet dhcp retransmission-interval 30
set interfaces fxp0 description br-mgmt-enp9s0-lan2
set interfaces fxp0 unit 0 description fxp-mgmt
set interfaces fxp0 unit 0 family inet address 192.168.100.215/24
set routing-instances vr instance-type virtual-router
set routing-instances vr routing-options static route 0.0.0.0/0 next-hop 198.51.100.1
set routing-instances vr interface ge-0/0/0.100
set routing-instances vr interface ge-0/0/0.200
set routing-instances vr interface ge-0/0/1.0
set routing-instances wwan instance-type virtual-router
set routing-instances wwan interface ge-0/0/2.0
set routing-options static route 0.0.0.0/0 next-hop 192.168.100.216

Useful links

Glossary

  • DPDK Data Plane Development Kit
  • DIN Deutsches Institut für Normung
  • HT Hyper Threading
  • KVM Kernel Virtual Machine
  • LV Logical Volume
  • NIC Network Interface Card
  • OVS Open vSwitch
  • PFE Packet Forwarding Engine
  • PPS Packets Per Second
  • RE Routing Engine
  • RCU Read Copy Update
  • SR-IOV Single Root Input Output Virtualization
  • SSD Solid-State Drive
  • STP Spanning Tree Protocol
  • TBW Terabytes Written
  • TPM Trusted Platform Module
  • VG Volume Group

Acknowledgements

Thanks to ASRock representative David Wei for delivering and supporting the hardware platform, as well as to all the people who participated in content creation, hardware/software bring-up, and publishing—namely David Kuncar and Nicolas Fevrier. Of course, it would not have been possible to make things happen without all the brilliant open-source software. Finally, thanks to the vSRX/SRX development and product teams for delivering the Swiss Army knife for security and networking. 

Comments

If you want to reach out for comments, feedback, or questions, drop us an email at:

Revision History

Version Author(s) Date Comments
1 Karel Hendrych May 2025 Initial Publication
2 Karel Hendrych June 2025 Minor corrections, addition of appendixes 2 and 3


#SolutionsandTechnology
