
Juniper cRPD 20.4 on Docker Desktop

Written by Marcel Wiget


For the first time in history, it is possible to run Junos instances on a Mac or PC laptop to learn and test Junos and its routing protocols. This blog post walks you through getting cRPD up and running using Docker Desktop, enabling ssh and NETCONF, and applying basic Junos automation scripts via PyEZ, all from the comfort of your laptop.

Table of Contents

Some Background

cRPD Overview

Requirements

Load and Run cRPD

SSH and NETCONF cRPD Access

Persistent cRPD Configuration

Connecting cRPD Instances Together

Where To Go From Here?

Some Background

Junos has been the network operating system on Juniper routers and switches for nearly 25 years now, tightly integrated with Juniper hardware. When I joined Juniper 12 years ago as a Systems Engineer, I heard about an unofficial project called “Olive”, which allowed Junos to run in a virtual machine. I used that unofficial code to get up to speed on Junos and the various routing protocols. But it was a bit like “walking on thin ice”, as some features just weren’t supported and there was nowhere to report issues.

A few years ago, the vMX (https://www.juniper.net/us/en/products-services/routing/mx-series/vmx/) was born, which separated the Junos control plane from the forwarding plane, each running in its own virtual machine. I became an immediate fan and started to deploy them frequently on Linux servers. The memory, CPU and storage requirements are, however, too high to run them efficiently on a laptop.

Why not just take the control plane daemons from Junos (cli, mgd, rpd and the likes), port them to Linux, package them into a Linux container and ship it? It sounded really hard to me, but engineering delivered the first commercial version in 2019 with Junos 19.2!

cRPD Overview

Containerized Routing Protocol Daemon (cRPD) is Juniper’s routing protocol daemon (rpd) decoupled from Junos OS and packaged as a Docker container to run in Linux-based environments. Wait, this blog talks about running cRPD on Docker Desktop (https://www.docker.com/products/docker-desktop), which is available only for Mac (running macOS) and PC (running Windows). Docker Desktop leverages the native virtualization technology of each platform, Hyper-V on Windows and the Hypervisor framework on macOS, to run a small Linux VM into which containers are deployed.

Back to cRPD: rpd runs as a user-space application, learns route state via its routing protocols, maintains it in the RIB (Routing Information Base), downloads routes into the FIB (Forwarding Information Base) and shares them with the Linux kernel via netlink.
Figure: cRPD on Linux architecture (Source: cRPD Deployment Guide for Linux Server, “cRPD on Linux Architecture”)

Two additional processes are required for cRPD to become fully functional: cli and mgd, which allow a user (or program) to manage the Junos configuration and retrieve state information.

What about forwarding packets, you might wonder by now? Well, that’s left to the Linux kernel (network namespace) the container runs in. The integration is amazingly seamless and opens up various use cases, from routing on the host to being the routing daemon in SONiC (https://github.com/Azure/SONiC). In fact, it runs on pretty much any netlink-capable (https://tools.ietf.org/html/rfc3549) Linux-based operating system.

Now that you have the background knowledge, let’s get cRPD up and running on your Mac and PC based laptops.

Requirements

To follow along, you need Docker Desktop installed on your Mac or PC and the cRPD 20.4 container image (junos-routing-crpd-docker-20.4R1.12.tgz), downloaded from the Juniper software download site.

Load and Run cRPD

There are only 3 steps left to log into the cRPD CLI on your laptop:

  • Load the downloaded cRPD container image into Docker using “docker load”
  • Launch cRPD via “docker run”
  • Start the CLI via “docker exec”

Figure 1: The same commands executed on Docker Desktop for PC and Mac.

First, we need to load the cRPD image into Docker. Open a terminal window and execute the following command:

$ docker load -i junos-routing-crpd-docker-20.4R1.12.tgz
3277c838545b: Loading layer [============================================>]  3.072kB/3.072kB
c346691c15b5: Loading layer [============================================>]  2.048kB/2.048kB
e62280f5c533: Loading layer [============================================>]  160.2MB/160.2MB
6c1722f8add2: Loading layer [============================================>]   7.68kB/7.68kB
ac78d37485ea: Loading layer [============================================>]  145.4MB/145.4MB
d4392baa1265: Loading layer [============================================>]   7.68kB/7.68kB
bdd1932d4400: Loading layer [============================================>]  4.096kB/4.096kB
fdff2a26df85: Loading layer [============================================>]  3.072kB/3.072kB
477802f79e58: Loading layer [============================================>]   2.56kB/2.56kB
80766dd788ae: Loading layer [============================================>]  4.096kB/4.096kB
23eb329e12ca: Loading layer [============================================>]  4.096kB/4.096kB
eb4d015c7398: Loading layer [============================================>]  4.096kB/4.096kB
feaef00647a6: Loading layer [============================================>]  4.096kB/4.096kB
083e82663c46: Loading layer [============================================>]  4.096kB/4.096kB
ca26467dd365: Loading layer [============================================>]  4.096kB/4.096kB
9339072b9977: Loading layer [============================================>]  4.096kB/4.096kB
11e7cac6c4fc: Loading layer [============================================>]  3.584kB/3.584kB
df610ea1a356: Loading layer [============================================>]  4.096kB/4.096kB
c2f8264c0c3e: Loading layer [============================================>]  4.096kB/4.096kB
652f8fdd61db: Loading layer [============================================>]  4.096kB/4.096kB
3ce266ac9e18: Loading layer [============================================>]  4.096kB/4.096kB
be33bb9d2d79: Loading layer [============================================>]  4.096kB/4.096kB
e887fc8d6cc5: Loading layer [============================================>]  4.096kB/4.096kB
3fcdf9e4fb3c: Loading layer [============================================>]  4.096kB/4.096kB
f1770dd88a3c: Loading layer [============================================>]  59.39kB/59.39kB
8f73951b6cdb: Loading layer [============================================>]  3.072kB/3.072kB
800469be878b: Loading layer [============================================>]  41.47kB/41.47kB
b7d9e0eb1216: Loading layer [============================================>]  1.124MB/1.124MB
Loaded image: crpd:20.4R1.12
 
$ docker images
REPOSITORY               TAG                      IMAGE ID       CREATED        SIZE
crpd                     20.4R1.12                f6c7b4b6e0ac   4 weeks ago    366MB

 

The date shown next to the crpd:20.4R1.12 image shows when the image was packaged and published by Juniper. Now we can launch cRPD as a daemon using a few command line options:

-ti: attaches a pty to the process to allow interactive console access. While this isn’t strictly required for cRPD to run as a daemon, it is required when running on Mac or PC. Without it, you’ll get “error: the routing subsystem is not running”.
-d: launches the container instance in the background, detached from the shell.
--rm: cleans up after the container terminates by removing the instance volume afterwards.
--name: gives the running instance a name we can use to reference it later, e.g. to access the cRPD Junos CLI or to stop it.

$ docker run -ti -d --rm --name crpd1 crpd:20.4R1.12
8e83d324d037bf1d7b25a9d9f287ad01b60323e0fe71f1f964c7a5c5839bb8d3
mwjp:Downloads mwiget$ docker ps
CONTAINER ID   IMAGE            COMMAND                 CREATED         STATUS         PORTS                                                                         NAMES
8e83d324d037   crpd:20.4R1.12   "/sbin/runit-init.sh"   3 seconds ago   Up 2 seconds   22/tcp, 179/tcp, 830/tcp, 3784/tcp, 4784/tcp, 6784/tcp, 7784/tcp, 50051/tcp   crpd1

 

And finally, log into the CLI by executing the cli process within the running container:

$ docker exec -ti crpd1 cli
root@bc3867dfa7c3> show version
Hostname: bc3867dfa7c3
Model: cRPD
Junos: 20.4R1.12
cRPD package version : 20.4R1.12 built by builder on 2020-12-20 13:35:15 UTC
 
root@bc3867dfa7c3> quit

 

Now go ahead and explore the Junos CLI a bit: try out various commands, enter config mode and exit. Version 20.4 and newer support “show interfaces”, not just “show interfaces routing”. Remember, cRPD handles neither packet forwarding nor interface configuration. If these commands give you an error like “error: the routing subsystem is not running”, then you likely haven’t used the “-ti” option for “docker run”.

root@bc3867dfa7c3> show interfaces
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.2  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:ac:11:00:02  txqueuelen 0  (Ethernet)
        RX packets 12  bytes 976 (976.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 
root@bc3867dfa7c3> show interfaces routing
Interface        State Addresses
tunl0            Down  MPLS  enabled
                       ISO   enabled
lo.0             Up    MPLS  enabled
                       ISO   enabled
ip6tnl0          Down  MPLS  enabled
                       ISO   enabled
eth0             Up    MPLS  enabled
                       ISO   enabled
                       INET  172.17.0.2
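
Because cRPD shares interface and route state with the Linux kernel via netlink, the same information is visible with plain Linux tooling inside the container. For example (output omitted here):

$ docker exec -ti crpd1 ip addr show dev eth0
$ docker exec -ti crpd1 ip route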

 

You may be tempted to check connectivity via “ping”. Well, bad luck: there is no ping in the cRPD CLI. The same goes for accessing the shell:

root@bc3867dfa7c3> ping
                   ^
unknown command.
root@bc3867dfa7c3> shell
                   ^
unknown command.
root@bc3867dfa7c3> quit

 

No problem. You can access ping directly via a bash shell in the container by launching bash instead of cli via ‘docker exec’:

$ docker exec -ti crpd1 bash
 
===>
           Containerized Routing Protocols Daemon (CRPD)
 Copyright (C) 2020, Juniper Networks, Inc. All rights reserved.
                                                                    <===
 
root@bc3867dfa7c3:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=37 time=33.4 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=37 time=29.5 ms
^C
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 29.508/31.473/33.439/1.973 ms
 
root@bc3867dfa7c3:/# cli
root@bc3867dfa7c3> show version
Hostname: bc3867dfa7c3
Model: cRPD
Junos: 20.4R1.12
cRPD package version : 20.4R1.12 built by builder on 2020-12-20 13:35:15 UTC
 
root@bc3867dfa7c3> quit
 
root@bc3867dfa7c3:/# exit
exit

 

Launching an application within the running container can come in handy, e.g. if you just want to check a CLI show command or try out NETCONF interactively:

$ docker exec -ti crpd1 cli show interface routing
Interface        State Addresses
tunl0            Down  MPLS  enabled
                       ISO   enabled
lo.0             Up    MPLS  enabled
                       ISO   enabled
ip6tnl0          Down  MPLS  enabled
                       ISO   enabled
eth0             Up    MPLS  enabled
                       ISO   enabled
                       INET  172.17.0.2

 

Let’s try out NETCONF via the shell. First, find the RPC for “show route” using the CLI, then execute that RPC via NETCONF:

$ docker exec -ti crpd1 cli
root@bc3867dfa7c3> show route |display xml rpc
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/20.4R0/junos">
    <rpc>
        <get-route-information>
        </get-route-information>
    </rpc>
    <cli>
        <banner></banner>
    </cli>
</rpc-reply>

 

Now let’s send this RPC directly to the netconf application in cRPD. Note the use of option ‘-i’ instead of ‘-ti’: because we pipe the command in, there is no TTY available, and requesting one would produce an error message:

$ echo "<rpc><get-route-information/></rpc>" | docker exec -i crpd1 netconf
<!-- No zombies were killed during the creation of this user interface -->
<!-- user root, class super-user -->
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>urn:ietf:params:netconf:base:1.0</capability>
    <capability>urn:ietf:params:netconf:capability:candidate:1.0</capability>
    <capability>urn:ietf:params:netconf:capability:confirmed-commit:1.0</capability>
    <capability>urn:ietf:params:netconf:capability:validate:1.0</capability>
    <capability>urn:ietf:params:netconf:capability:url:1.0?scheme=http,ftp,file</capability>
    <capability>urn:ietf:params:xml:ns:netconf:base:1.0</capability>
    <capability>urn:ietf:params:xml:ns:netconf:capability:candidate:1.0</capability>
    <capability>urn:ietf:params:xml:ns:netconf:capability:confirmed-commit:1.0</capability>
    <capability>urn:ietf:params:xml:ns:netconf:capability:validate:1.0</capability>
    <capability>urn:ietf:params:xml:ns:netconf:capability:url:1.0?scheme=http,ftp,file</capability>
    <capability>urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring</capability>
    <capability>http://xml.juniper.net/netconf/junos/1.0</capability>
    <capability>http://xml.juniper.net/dmi/system/1.0</capability>
  </capabilities>
  <session-id>323</session-id>
</hello>
]]>]]>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" xmlns:junos="http://xml.juniper.net/junos/20.4R0/junos">
<route-information xmlns="http://xml.juniper.net/junos/20.4R0/junos-routing">
<!-- keepalive -->
<route-table>
<table-name>inet.0</table-name>
<destination-count>2</destination-count>
<total-route-count>2</total-route-count>
<active-route-count>2</active-route-count>
<holddown-route-count>0</holddown-route-count>
<hidden-route-count>0</hidden-route-count>
<rt junos:style="brief">
<rt-destination>172.17.0.0/16</rt-destination>
<rt-entry>
<active-tag>*</active-tag>
<current-active/>
<last-active/>
<protocol-name>Direct</protocol-name>
<preference>0</preference>
<age junos:seconds="774">00:12:54</age>
<nh>
<selected-next-hop/>
<via>eth0</via>
</nh>
</rt-entry>
</rt>
<rt junos:style="brief">
<rt-destination>172.17.0.2/32</rt-destination>
<rt-entry>
<active-tag>*</active-tag>
<current-active/>
<last-active/>
<protocol-name>Local</protocol-name>
<preference>0</preference>
<age junos:seconds="774">00:12:54</age>
<nh-type>Local</nh-type>
<nh>
<nh-local-interface>eth0</nh-local-interface>
</nh>
</rt-entry>
</rt>
</route-table>
<route-table>
<table-name>inet6.0</table-name>
<destination-count>1</destination-count>
<total-route-count>1</total-route-count>
<active-route-count>1</active-route-count>
<holddown-route-count>0</holddown-route-count>
<hidden-route-count>0</hidden-route-count>
<rt junos:style="brief">
<rt-destination>ff02::2/128</rt-destination>
<rt-entry>
<active-tag>*</active-tag>
<current-active/>
<last-active/>
<protocol-name>INET6</protocol-name>
<preference>0</preference>
<age junos:seconds="774">00:12:54</age>
<nh-type>MultiRecv</nh-type>
</rt-entry>
</rt>
</route-table>
</route-information>
</rpc-reply>
]]>]]>
<!-- session end at 2021-01-18 10:30:21 UTC -->
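
The same piping approach works for any NETCONF RPC. For example, to retrieve the running configuration using the standard NETCONF &lt;get-config&gt; RPC (a quick sketch, output omitted):

$ echo "<rpc><get-config><source><running/></source></get-config></rpc>" | docker exec -i crpd1 netconf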

 

Can you reach the container’s IP address from the host shell (PC or Mac)? Unfortunately not, as the ping attempt below shows. Please note that we execute ‘cli show int’ without entering an interactive shell, so we don’t need the ‘-ti’ option. While specifying it wouldn’t hurt for this short output, it would interfere with output longer than 24 lines, because the CLI invokes its pager. Not specifying ‘-ti’ avoids that issue:

$ docker exec crpd1 cli show int
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.2  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:ac:11:00:02  txqueuelen 0  (Ethernet)
        RX packets 19  bytes 1466 (1.4 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4  bytes 280 (280.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 
mwjp:~ mwiget$ ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
^C
--- 172.17.0.2 ping statistics ---
3 packets transmitted, 0 packets received, 100.0% packet loss

 

OK, so no connectivity from the host to the container on PC and Mac. On Linux, this would have worked perfectly. Does this mean cRPD on Mac and PC is too limited and one needs to fall back to installing and running a Linux VM (e.g. via VirtualBox, Parallels or VMware)? No. The workaround is to launch another container next to cRPD and execute ping etc. from there.

$ docker run -ti --rm alpine
/ # ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2): 56 data bytes
64 bytes from 172.17.0.2: seq=0 ttl=64 time=0.607 ms
64 bytes from 172.17.0.2: seq=1 ttl=64 time=0.135 ms
^C
--- 172.17.0.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.135/0.371/0.607 ms
/ # quit

 

This will come in handy later, when trying out PyEz against cRPD.

Done! Well, no, that’s just the start of it, really. What about ssh and NETCONF access via PyEZ, what about persistent configuration storage, and when do I need a license key and how do I add one? Can I build a simple topology of interconnected cRPD instances? Let me address them all, one by one.

First let’s stop and clean up the running container:

$ docker stop crpd1
crpd1
$ docker rm crpd1
Error: No such container: crpd1

The error shown above is expected; it simply proves that we launched the container with the automatic cleanup option (--rm), which already removed it.

 

SSH and NETCONF cRPD Access

Starting with cRPD 20.4, ssh and NETCONF can be configured via the Junos configuration, just like on any other Junos device. Let’s launch crpd1, configure root-authentication, create a user lab, and enable ssh with NETCONF.

$ docker run -ti -d --rm --name crpd1 crpd:20.4R1.12
c3223b960b0714466bf0f862a178125ad1e0620d73a696aabdaceec82107b285
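
The configuration steps themselves aren’t captured here; a minimal sketch of such a session inside the cRPD CLI looks like this (passwords are prompted for and hashed automatically; ‘lab123’ was used for both accounts):

$ docker exec -ti crpd1 cli
root@c3223b960b07> configure
Entering configuration mode
[edit]
root@c3223b960b07# set system root-authentication plain-text-password
New password:
Retype new password:
root@c3223b960b07# set system login user lab uid 2000
root@c3223b960b07# set system login user lab class super-user
root@c3223b960b07# set system login user lab authentication plain-text-password
New password:
Retype new password:
root@c3223b960b07# set system services ssh
root@c3223b960b07# set system services netconf ssh
root@c3223b960b07# commit and-quit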

The resulting committed configuration looks like this; the password used is ‘lab123’:

$ docker exec -ti crpd1 cli show conf \|display set
set version 20201217.193015.11_builder.r1158818
set system root-authentication encrypted-password "$6$ohjbr$EeZi/hQTlC4AiYUwQZ.28.EXpi6CaFrHoaosbhgyLbbkyoCoysgvp.DaxCkUbOZqXXCtZbUWU0K8RAfUrdxj5/"
set system login user lab uid 2000
set system login user lab class super-user
set system login user lab authentication encrypted-password "$6$nut7S$TWTZFJy6KMum2owQwnjix.gRAkN/1mLFDFrJC/IKA5J4M.ssxTSGmeec/jgq8b.dWFHbgGEdm9/FaW17QcUgQ."
set system services ssh
set system services netconf ssh

 

BTW, escaping the pipe as ‘\|’ lets CLI pipe commands work just fine from the host shell. We can’t simply ssh or use NETCONF from the host command shell, so we need to launch another container to do so.

To find out the IP address of crpd1, we can use well-known Linux commands, executed in crpd1:

$ docker exec -ti crpd1 ip addr show dev eth0
18: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
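
Alternatively, assuming crpd1 sits on Docker’s default bridge network, docker inspect on the host reports the same address:

$ docker inspect -f '{{.NetworkSettings.IPAddress}}' crpd1
172.17.0.2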

 

Instead of using a plain alpine or ubuntu container, I’ll start with juniper/pyez, which gets downloaded automatically unless already present locally:

$ docker run -ti --rm juniper/pyez
Unable to find image 'juniper/pyez:latest' locally
latest: Pulling from juniper/pyez
188c0c94c7c5: Already exists
b98271b6163d: Pull complete
260d17b074e7: Pull complete
9b37daaccb80: Pull complete
e26c885360d7: Pull complete
f0c663892ee2: Pull complete
6c2fe3c20419: Pull complete
049fe9a9e8b6: Pull complete
480a0365eb38: Pull complete
Digest: sha256:39ee9b385d23aa3c2ad786ec330e44dc8af93ecc6feca7ce0f715ae3b940f07d
Status: Downloaded newer image for juniper/pyez:latest
bash-5.0#

 

Check to see if the ssh client is already installed. If not, find out what distribution this container is based on:

bash-5.0# ssh root@172.17.0.2
bash: ssh: command not found
bash-5.0# cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.12.1
PRETTY_NAME="Alpine Linux v3.12"
HOME_URL="https://alpinelinux.org/"

 

In this case it’s Alpine. We can add the ssh client with `apk add openssh`:

bash-5.0# apk add openssh
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/x86_64/APKINDEX.tar.gz
(1/7) Installing openssh-keygen (8.3_p1-r1)
(2/7) Installing libedit (20191231.3.1-r0)
(3/7) Installing openssh-client (8.3_p1-r1)
(4/7) Installing openssh-sftp-server (8.3_p1-r1)
(5/7) Installing openssh-server-common (8.3_p1-r1)
(6/7) Installing openssh-server (8.3_p1-r1)
(7/7) Installing openssh (8.3_p1-r1)
Executing busybox-1.31.1-r19.trigger
OK: 142 MiB in 76 packages
bash-5.0#

 

Before launching ssh, find the local IP of the alpine container:

bash-5.0# ip ad show dev eth0
20: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

 

The IP addresses of the crpd1 and pyez containers are in the same subnet. Let’s try ssh from alpine to crpd1:

bash-5.0# ssh lab@172.17.0.2

The authenticity of host '172.17.0.2 (172.17.0.2)' can't be established.
ECDSA key fingerprint is SHA256:fvlZKOWSC+fn7NewgswXSE7wZHOXEXOL/kVM7TNRIt8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.
Password:
lab@c3223b960b07> show version
Hostname: c3223b960b07
Model: cRPD
Junos: 20.4R1.12
cRPD package version : 20.4R1.12 built by builder on 2020-12-20 13:35:15 UTC
 
lab@c3223b960b07> quit
 
Connection to 172.17.0.2 closed.
bash-5.0#

 

It worked! Without exiting the alpine container, let’s fire up Python and query crpd1 using a tiny script (create the file test.py with the content shown), then execute it:

bash-5.0# cat test.py
from jnpr.junos import Device
from pprint import pprint
 
with Device(host='172.17.0.2', user='lab', password='lab123') as dev:
        pprint (dev.facts)
 
bash-5.0#
bash-5.0# python3 test.py
/usr/lib/python3.8/site-packages/jnpr/junos/device.py:857: RuntimeWarning: An unknown exception occurred - please report.
  warnings.warn(
{'2RE': None,
 'HOME': '/var/home/lab',
 'RE0': None,
 'RE1': None,
 'RE_hw_mi': None,
 'current_re': None,
 'domain': None,
 'fqdn': 'c3223b960b07',
 'hostname': 'c3223b960b07',
 'hostname_info': {'re0': 'c3223b960b07'},
 'ifd_style': 'CLASSIC',
 'junos_info': {'re0': {'object': junos.version_info(major=(20, 4), type=R, minor=1, build=12),
                        'text': '20.4R1.12'}},
 'master': None,
 'model': 'CRPD',
 'model_info': {'re0': 'CRPD'},
 'personality': None,
 're_info': None,
 're_master': None,
 'serialnumber': None,
 'srx_cluster': None,
 'srx_cluster_id': None,
 'srx_cluster_redundancy_group': None,
 'switch_style': 'NONE',
 'vc_capable': False,
 'vc_fabric': None,
 'vc_master': None,
 'vc_mode': None,
 'version': '20.4R1.12',
 'version_RE0': '20.4R1.12',
 'version_RE1': None,
 'version_info': junos.version_info(major=(20, 4), type=R, minor=1, build=12),
 'virtual': None}
bash-5.0#

 

In case you run an older version of cRPD, port 830 might not respond. Try port 22 instead:

bash-5.0# cat test22.py
from jnpr.junos import Device
from pprint import pprint
 
with Device(host='172.17.0.2', user='lab', password='lab123', port=22) as dev:
        pprint (dev.facts)
 
bash-5.0# python3 test22.py
/usr/lib/python3.8/site-packages/jnpr/junos/device.py:857: RuntimeWarning: An unknown exception occurred - please report.
  warnings.warn(
{'2RE': None,
 'HOME': '/var/home/lab',
 'RE0': None,
 'RE1': None,
 'RE_hw_mi': None,
 'current_re': None,
 'domain': None,
 'fqdn': 'c3223b960b07',
 'hostname': 'c3223b960b07',
 'hostname_info': {'re0': 'c3223b960b07'},
 'ifd_style': 'CLASSIC',
 'junos_info': {'re0': {'object': junos.version_info(major=(20, 4), type=R, minor=1, build=12),
                        'text': '20.4R1.12'}},
 'master': None,
 'model': 'CRPD',
 'model_info': {'re0': 'CRPD'},
 'personality': None,
 're_info': None,
 're_master': None,
 'serialnumber': None,
 'srx_cluster': None,
 'srx_cluster_id': None,
 'srx_cluster_redundancy_group': None,
 'switch_style': 'NONE',
 'vc_capable': False,
 'vc_fabric': None,
 'vc_master': None,
 'vc_mode': None,
 'version': '20.4R1.12',
 'version_RE0': '20.4R1.12',
 'version_RE1': None,
 'version_info': junos.version_info(major=(20, 4), type=R, minor=1, build=12),
 'virtual': None}

 

Both methods work on version 20.4.
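
PyEZ can also invoke the RPC we discovered earlier with ‘| display xml rpc’. Here is a small sketch along the same lines (the file name rpc_demo.py is just an example; it reuses the host and credentials from above):

bash-5.0# cat rpc_demo.py
from jnpr.junos import Device
from lxml import etree

# Invoke the <get-route-information/> RPC discovered earlier via "| display xml rpc"
with Device(host='172.17.0.2', user='lab', password='lab123') as dev:
    routes = dev.rpc.get_route_information()
    print(etree.tostring(routes, pretty_print=True).decode())

bash-5.0# python3 rpc_demo.py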

Persistent cRPD Configuration

So far, we have successfully launched a cRPD container, accessed its CLI and used programmatic access via NETCONF. But once the container terminates, the applied configuration is lost. You could extract the configuration via “show configuration” or transfer it using scp, but there is a better way: simply mount a folder from the host into the cRPD container, where configuration changes are saved and remain available after an instance terminates. This is also a great way to populate a configuration right when an instance launches.

docker run has an option to mount volumes via its ‘--volume’ argument. A cRPD instance keeps its configuration within the container filesystem under /config. Here is how this can be done, first on Mac, then on PC:

$ mkdir crpd1
$ docker run -ti --rm -d --name crpd1 --volume $PWD/crpd1:/config --privileged crpd:20.4R1.12

 

And on PC (replacing $PWD with %CD%):

$ mkdir crpd1
$ docker run -ti --rm -d --name crpd1 --volume %CD%/crpd1:/config --privileged crpd:20.4R1.12

 

Now any configuration change done within the crpd1 instance is saved persistently in the host folder crpd1:

$ ls -l crpd1/
total 16
-rw-r-----  1 mwiget  staff  103 Jan 19 09:37 juniper.conf.1.gz
-rw-r-----  1 mwiget  staff  252 Jan 19 10:23 juniper.conf.gz
drwx------  4 mwiget  staff  128 Jan 19 09:37 license
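
To convince yourself that the configuration really persists, stop the instance and launch a fresh one against the same folder; it comes up with the previously committed configuration (a quick sketch):

$ docker stop crpd1
$ docker run -ti --rm -d --name crpd1 --volume $PWD/crpd1:/config --privileged crpd:20.4R1.12
$ docker exec -ti crpd1 cli show configuration \|display set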

 

There is also a folder called ‘license’. This one stores license keys added via the CLI or through the Junos configuration.
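
As an illustration only, and assuming your cRPD build exposes the standard Junos license commands, a key can be added either interactively via the CLI or as a configuration statement; the key string below is a placeholder and the prompt (shown here as crpd1 for readability) will display the container’s hostname:

$ docker exec -ti crpd1 cli
root@crpd1> request system license add terminal
(paste the license key text, then press Ctrl-D)

root@crpd1> configure
[edit]
root@crpd1# set system license keys key "<license key string>"
root@crpd1# commit and-quit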

Connecting cRPD Instances Together

More often than not, I needed a way to build point-to-point links between containers in order to test functionality requiring true L2 connectivity, without a bridge in between as typically provided by Docker networking. A Linux veth link provides two endpoint interfaces that can be moved into the containers’ network namespaces. On Mac and PC this gets a bit trickier, as there is no easy access to the actual Linux VM running the containers. The trick is to launch a helper container that creates the veth link and stitches its endpoints into the cRPD containers. That helper requires access to the Docker socket and must share the host PID namespace, both of which can be granted using docker run command line options. The “magic” of creating the links and adding them to running containers is done by the container marcelwiget/link-containers:latest, automatically built from this repo: https://github.com/mwiget/link-containers, and published to hub.docker.com: https://hub.docker.com/r/marcelwiget/link-containers. A conceptual sketch of that plumbing follows; the actual launch scripts come right after it.
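
Conceptually, the helper performs plumbing along these lines inside the Docker VM (a rough, illustrative sketch, not the actual link-containers script; interface names are arbitrary):

PID1=$(docker inspect -f '{{.State.Pid}}' crpd1)
PID2=$(docker inspect -f '{{.State.Pid}}' crpd2)
ip link add tmp1 type veth peer name tmp2            # create the veth pair
ip link set tmp1 netns "$PID1"                       # move one end into each container
ip link set tmp2 netns "$PID2"
nsenter -t "$PID1" -n ip link set tmp1 name eth1     # rename and bring it up in crpd1
nsenter -t "$PID1" -n ip link set eth1 up
nsenter -t "$PID2" -n ip link set tmp2 name eth1     # same in crpd2
nsenter -t "$PID2" -n ip link set eth1 up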

On Mac:

$ cat linked_crpd.sh
#!/bin/bash
mkdir crpd1 crpd2
set -e
 
docker run -ti --rm -d --name crpd1 --volume $PWD/crpd1:/config --privileged crpd:20.4R1.12
docker run -ti --rm -d --name crpd2 --volume $PWD/crpd2:/config --privileged crpd:20.4R1.12
 
docker run --rm --privileged --net none --pid host -v /var/run/docker.sock:/var/run/docker.sock marcelwiget/link-containers crpd1/crpd2
docker run --rm --privileged --net none --pid host -v /var/run/docker.sock:/var/run/docker.sock marcelwiget/link-containers crpd1/crpd2
  
and the same on PC:
 
$ cat linked_crpd.bat
mkdir crpd1 crpd2
 
docker run -ti --rm -d --name crpd1 --volume %cd%/crpd1:/config --privileged crpd:20.4R1.12
docker run -ti --rm -d --name crpd2 --volume %cd%/crpd2:/config --privileged crpd:20.4R1.12
 
docker run --rm --privileged --net none --pid host -v /var/run/docker.sock:/var/run/docker.sock marcelwiget/link-containers crpd1/crpd2
docker run --rm --privileged --net none --pid host -v /var/run/docker.sock:/var/run/docker.sock marcelwiget/link-containers crpd1/crpd2

 

Once executed, you get two containers running, connected together with two links, eth1 and eth2, and with their configurations saved automatically on the host (your laptop) in the folders ./crpd1 and ./crpd2, relative to the directory you launched the containers from.

$ docker exec -ti crpd1 ip link|grep eth |grep UP
9: eth1@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
11: eth2@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
43: eth0@if44: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
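
With the link in place, you can address eth1 from the Linux side (remember, cRPD leaves interface configuration to the kernel) and run a routing protocol over it. A minimal sketch using OSPF follows; the addresses are arbitrary examples and the CLI prompt (shown as crpd1 for readability) will display the container’s hostname:

$ docker exec -ti crpd1 ip addr add 10.0.0.1/24 dev eth1
$ docker exec -ti crpd2 ip addr add 10.0.0.2/24 dev eth1
$ docker exec -ti crpd1 cli
root@crpd1> configure
[edit]
root@crpd1# set routing-options router-id 10.0.0.1
root@crpd1# set protocols ospf area 0.0.0.0 interface eth1
root@crpd1# commit and-quit

Repeat the same configuration on crpd2 with router-id 10.0.0.2, then verify the adjacency with “show ospf neighbor”.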

Where To Go From Here?

So far, we launched just a couple of cRPD instances on a laptop, but what if you need a larger topology? It’s all doable, but it requires some automation, ideally via docker-compose, which comes installed with Docker Desktop. A working example that builds a mesh of many instances and works on OSX (and likely on Windows too) can be found here: https://gitlab.com/mwiget/honeycomb-crpd-mesh (search for OSX in the README.md).
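
As a starting point, a minimal docker-compose.yml for two instances might look like this (a sketch only; it reuses the image and volume layout from above and relies on Docker’s default bridge network rather than dedicated point-to-point links):

$ cat docker-compose.yml
version: "3"
services:
  crpd1:
    image: crpd:20.4R1.12
    container_name: crpd1
    privileged: true
    stdin_open: true   # equivalent of -i
    tty: true          # equivalent of -t, avoids "the routing subsystem is not running"
    volumes:
      - ./crpd1:/config
  crpd2:
    image: crpd:20.4R1.12
    container_name: crpd2
    privileged: true
    stdin_open: true
    tty: true
    volumes:
      - ./crpd2:/config

$ docker-compose up -d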

A lot more information on cRPD can be found on Juniper TechPubs, e.g. in the cRPD Deployment Guide for Linux:

https://www.juniper.net/documentation/en_US/crpd/information-products/pathway-pages/deployment/crpd-dep-guide-pwp.pdf

I hope you enjoyed this intro. If you find errors or have suggestions for improvements, please let us know in the comment section below.

 
