RPD, the Routing Protocol Daemon, is the well-known routing daemon behind Junos. It manages the routing protocols that make Juniper products first-class network devices.
As the software world looks at containers more and more, RPD now has a containerized version: cRPD, that is, RPD running inside a container.
Adding a “c” means far more than simply moving a process into a container. This is a routing process, able to run and manage all those advanced protocols that keep the Internet up and running.
cRPD is a small-footprint routing control plane that you can run wherever a container can run… and this is not something to underestimate. Why? Because it allows us to transform devices into Junos-powered routers 🙂
To better understand what I mean, I’m going to transform an Ubuntu server VM into a router, leveraging Junos routing capabilities.
Our server looks like this:
The server runs an operating system. I pictured a “generic” Linux/Unix-based OS since, realistically, most of the time cRPD will run on such OSs: it can run on CentOS, Fedora, RHEL, etc. Here, I opted for Ubuntu 18.04 so I could install a version 5 kernel, as Linux kernel 4.5 or higher is needed in order to make use of the MPLS features.
Inside the server, we will run the docker engine and, on top of docker, a cRPD container will be created.
The container will be created in host networking mode (start from here to understand the different networking modes available with docker).
Simply put, we use host networking mode because it attaches the container to the host’s network stack, meaning the network configuration inside the container matches the configuration outside of it. What does this mean? Assume my server has three interfaces: eth0, eth1 and eth2. In host networking mode, cRPD will “see” all three of them. That is exactly what we want if we desire to transform our server into a router: the entity responsible for routing must see all the available interfaces.
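A quick way to see the difference between the two modes (a sketch, assuming a small image such as alpine is available locally):

```shell
# default bridge mode: the container gets its own network namespace,
# so it only sees lo plus its own end of a veth pair
docker run --rm alpine ip -o link

# host mode: the container shares the host's namespace and sees
# every host interface (eth0, eth1, eth2, ...)
docker run --rm --net=host alpine ip -o link
```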
Let’s start building our “rouver”!
First, we set up Ubuntu properly.
We obtain kernel version 5:
sudo apt-get install --install-recommends linux-generic-hwe-18.04
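After rebooting into the new kernel, it is worth double-checking the version, since MPLS support is the whole reason for this step (a small sketch; the 4.5 threshold comes from the requirement above):

```shell
# MPLS routes require Linux kernel 4.5 or newer
kver=$(uname -r)
kmajor=${kver%%.*}
kminor=$(echo "$kver" | cut -d. -f2)
if [ "$kmajor" -gt 4 ] || { [ "$kmajor" -eq 4 ] && [ "$kminor" -ge 5 ]; }; then
    echo "kernel $kver is recent enough for MPLS"
else
    echo "kernel $kver is too old for MPLS" >&2
fi
```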
Next, we install docker:
apt-get update
apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
cRPD is not able to configure interface addresses, apart from the ISO addresses used for IS-IS (something you will probably do on the loopback interface only).
For this reason, we configure the interfaces using Ubuntu network files:
auto ens3f0
iface ens3f0 inet static
address 10.49.100.149
netmask 255.255.224.0
#network 10.49.100.0
#broadcast 10.49.100.255
gateway 10.49.127.254
# dns-* options are implemented by the resolvconf package, if installed
dns-nameservers 10.49.32.95 10.49.32.97
dns-search englab.juniper.net dcbg.juniper.net jnpr.net juniper.net
auto ens3f1
iface ens3f1 inet static
address 192.168.1.2
netmask 255.255.255.252
auto ens3f2
iface ens3f2 inet static
address 192.168.2.2
netmask 255.255.255.252
As you can see, we have three interfaces.
The same consideration is valid for the loopback interface. Here, I configure it using “ip”:
ip addr add 1.1.1.11/32 dev lo
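Note that an address added with “ip” does not survive a reboot. To make it persistent, the same address can also go into the network files, in the same interfaces(5) style used above (a sketch):

```
auto lo
iface lo inet loopback
    # re-add the loopback address on every boot
    up ip addr add 1.1.1.11/32 dev lo
```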
In order to use MPLS, we have to enable a couple of kernel modules:
modprobe mpls_iptunnel
modprobe mpls_router
We verify those modules are active:
root@rouver1:~# lsmod | grep mpls
mpls_iptunnel 20480 0
mpls_router 40960 1 mpls_iptunnel
ip_tunnel 24576 1 mpls_router
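Two caveats worth mentioning: modules loaded with modprobe are gone after a reboot, and on some setups the kernel must also be explicitly sized for a label table and told to accept MPLS packets on the core-facing interfaces. A sketch, to be run as root (the label-table size and the interface names are assumptions based on this setup):

```shell
# load the MPLS modules at every boot
printf 'mpls_router\nmpls_iptunnel\n' > /etc/modules-load.d/mpls.conf

# size the MPLS label table and accept labelled packets on the
# interfaces facing other MPLS routers
sysctl -w net.mpls.platform_labels=1048575
sysctl -w net.mpls.conf.ens3f1.input=1
sysctl -w net.mpls.conf.ens3f2.input=1
```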
At this point, we have to get the cRPD image.
Let’s check the available releases:
[root@rouver ~]# curl -s https://<user>:<password>@hub.juniper.net/v2/routing/crpd/tags/list | jq .tags
[
"19.2R1.8",
"19.2R1",
"19.4R1.10",
"20.1R1.11"
]
We pull the desired image:
docker login hub.juniper.net -u <user> -p <password>
docker pull hub.juniper.net/routing/crpd:20.1R1.11
We create two volumes:
docker volume create crpdumb-conf
docker volume create crpdumb-logs
The first one will contain configuration files, the second one the logs.
We said before that we are going to run the container in host networking mode. In this mode, the container uses the host OS network namespace, and this also includes listening ports. This detail matters! Why? Our server allows remote access via SSH, meaning port 22 is busy, used by the host.
By design, cRPD also listens on that port to allow NETCONF-over-SSH access. Once cRPD starts, it will try to bind to port 22 but, as it shares the host OS network stack, it will find the port already in use. As a result, the cRPD SSH daemon will fail.
In order to avoid this, we need cRPD to provide ssh on a different port.
This is achieved by creating an alternative sshd_config file which is mounted into the container when creating it. The file looks like this:
Port 8022
PermitRootLogin yes
ChallengeResponseAuthentication no
UsePAM yes
X11Forwarding yes
PrintMotd no
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
We call this file ssh8022.
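Before mounting it, the file can be syntax-checked with OpenSSH itself (assuming sshd is installed on the host; -t runs in test mode and only validates the configuration):

```shell
# parse /root/ssh8022 and report syntax errors without starting a daemon
sshd -t -f /root/ssh8022 && echo "ssh8022 looks valid"
```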
Finally, we run cRPD:
docker run --rm --detach --name crpd1 -h crpd1 --net=host --privileged -v crpdumb-conf:/config -v crpdumb-logs:/var/log --mount type=bind,source=/root/ssh8022,target=/etc/ssh/sshd_config -it hub.juniper.net/routing/crpd:20.1R1.11
Let’s go through all the options:
- --rm, remove the container when it exits
- --detach, run in detached mode
- --name, container name
- -h, hostname to be configured
- --net=host, set host networking mode
- -v, map volumes to container internal paths (config and logs)
- --mount, bind-mount the custom sshd_config file
- -it, interactive mode
Now, we access our cRPD:
root@rouver1:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
86d09691dae7 hub.juniper.net/routing/crpd:20.1R1.11 "/sbin/runit-init.sh" 23 hours ago Up 23 hours crpd1
root@rouver1:~# docker exec -it 86 cli
First, we configure the root password:
root@crpd1# set system root-authentication plain-text-password
We start a shell and check listening ports:
root@crpd1> start shell
netstat -tln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:179 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:666 0.0.0.0:* LISTEN
tcp6 0 0 :::179 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
service ssh restart
Restarting OpenBSD Secure Shell server sshd [ OK ]
netstat -tln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:179 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8022 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:666 0.0.0.0:* LISTEN
tcp6 0 0 :::179 :::* LISTEN
tcp6 0 0 :::8022 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
Now, we should be able to use NETCONF with cRPD. To verify this, we try to connect from another box:
umanferdini@box:~$ ssh root@10.49.100.149 -p 8022 netconf
root@10.49.100.149's password:
urn:ietf:params:netconf:base:1.0
urn:ietf:params:netconf:capability:candidate:1.0
urn:ietf:params:netconf:capability:confirmed-commit:1.0
urn:ietf:params:netconf:capability:validate:1.0
urn:ietf:params:netconf:capability:url:1.0?scheme=http,ftp,file
urn:ietf:params:xml:ns:netconf:base:1.0
urn:ietf:params:xml:ns:netconf:capability:candidate:1.0
urn:ietf:params:xml:ns:netconf:capability:confirmed-commit:1.0
urn:ietf:params:xml:ns:netconf:capability:validate:1.0
urn:ietf:params:xml:ns:netconf:capability:url:1.0?scheme=http,ftp,file
urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring
http://xml.juniper.net/netconf/junos/1.0
http://xml.juniper.net/dmi/system/1.0
421
]]>]]>
We can manage cRPD with NETCONF (this will come in handy later…).
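For instance, an RPC can be piped straight into that SSH subsystem. The session starts with a capabilities exchange, then RPCs follow, each terminated by the NETCONF 1.0 delimiter “]]&gt;]]&gt;”. A sketch (saved as rpc.xml, it can be sent with “ssh root@10.49.100.149 -p 8022 netconf < rpc.xml”):

```xml
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <capabilities>
    <capability>urn:ietf:params:netconf:base:1.0</capability>
  </capabilities>
</hello>
]]>]]>
<rpc>
  <!-- fetch the full running configuration as XML -->
  <get-config>
    <source><running/></source>
  </get-config>
</rpc>
]]>]]>
```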
Let’s check the interfaces. We do not have “show interfaces terse”, but we do have:
root@crpd1> show interfaces routing
Interface State Addresses
lo.0 Up MPLS enabled
ISO enabled
INET 1.1.1.11
ens3f2 Up MPLS enabled
ISO enabled
INET 192.168.2.2
INET6 fe80::5468:a3ff:fe16:14c
ens3f1 Up MPLS enabled
ISO enabled
INET 192.168.1.2
INET6 fe80::5468:a3ff:fe16:14a
ens3f0 Up MPLS enabled
ISO enabled
INET 10.49.100.149
INET6 fe80::5468:a3ff:fe16:146
docker0 Down MPLS enabled
ISO enabled
INET 172.17.0.1
As you can see, cRPD sees all the interfaces.
It also sees the docker default bridge, docker0. That bridge is down as no container is attached to it.
Connected to the server interface ens3f1, we have an MX router.
Let’s configure cRPD:
set policy-options policy-statement exp-und term lo from interface lo.0
set policy-options policy-statement exp-und term lo then accept
set policy-options policy-statement exp-und then reject
set policy-options policy-statement lb then load-balance per-packet
set routing-instances vrf1 interface br-a7344ee557f8
set routing-instances vrf1 instance-type vrf
set routing-instances vrf1 route-distinguisher 1.1.1.11:100
set routing-instances vrf1 vrf-target target:65000:100
set routing-instances vrf1 vrf-table-label
set routing-options forwarding-table export lb
set routing-options router-id 1.1.1.11
set routing-options autonomous-system 65000
set protocols bgp group routers type external
set protocols bgp group routers export exp-und
set protocols bgp group routers local-as 65501
set protocols bgp group routers neighbor 192.168.1.1 peer-as 65511
and check the BGP session status (be aware that you need a license to use BGP; a trial license is available here):
root@crpd1# run show bgp summary
Threading mode: BGP I/O
Groups: 1 Peers: 1 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
inet.0
0 0 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped…
192.168.1.1 65511 3 4 0 0 33 Establ
inet.0: 0/0/0/0
Up and running! Our Ubuntu server is talking BGP… and not just any BGP… Junos BGP!
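For reference, the MX side of this EBGP session would look something like this (a sketch: the interface name, group name, and export policy are assumptions, while addresses and AS numbers come from the setup above):

```
set interfaces xe-0/0/0 unit 0 family inet address 192.168.1.1/30
set routing-options router-id 2.2.2.11
set routing-options autonomous-system 65511
set protocols bgp group crpd type external
set protocols bgp group crpd export send-lo0
set protocols bgp group crpd neighbor 192.168.1.2 peer-as 65501
```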
Let’s see what we receive from the physical MX:
root@crpd1# run show route receive-protocol bgp 192.168.1.1
inet.0: 9 destinations, 9 routes (9 active, 0 holddown, 0 hidden)
Prefix Nexthop MED Lclpref AS path
2.2.2.11/32 192.168.1.1 65511 I
The MX is sending its loopback.
The route is now available in the cRPD RIB:
root@crpd1# run show route protocol bgp
inet.0: 9 destinations, 9 routes (9 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
2.2.2.11/32 *[BGP/170] 00:00:38, localpref 100
AS path: 65511 I, validation-state: unverified
> to 192.168.1.1 via ens3f1
But what about the FIB? To answer that, we need to ask ourselves what sits under cRPD… which is Ubuntu… so the FIB will be the Ubuntu kernel FIB!
root@rouver1:~# ip route
default via 10.49.127.254 dev ens3f0 proto dhcp src 10.49.100.149 metric 100
2.2.2.11 via 192.168.1.1 dev ens3f1 proto 22
10.49.96.0/19 dev ens3f0 proto kernel scope link src 10.49.100.149
10.49.127.254 dev ens3f0 proto dhcp scope link src 10.49.100.149 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
192.168.1.0/30 dev ens3f1 proto kernel scope link src 192.168.1.2
192.168.2.0/30 dev ens3f2 proto kernel scope link src 192.168.2.2
There it is! Our Ubuntu server learned via BGP how to reach 2.2.2.11. It really is a router now! Not bad 🙂
And more to come…
Ciao
IoSonoUmberto