Lab preparation with Saltstack: install Salt to manage servers and switches

I was going to install a new small Contrail+k8s cluster to test some new features. As usual, there are some preliminary steps to make the environment ready for installation. This mainly means configuring the lab IP Fabric, installing packages on servers and preparing installation templates.

Before jumping into this easy but tedious job I told myself “This time… what about doing this in a more modern and DevOps way?”. I already had some Ansible experience in the past, so here I opted for something new: Saltstack.

Saltstack is infrastructure automation software that can be used to manage your IT/networking equipment. Here, I will use Saltstack to provision the Contrail servers and configure the IP Fabric.

I will not use saltstack to install contrail. It will stop its work when everything is ready for Contrail to be installed. Saltstack prepares the lab! Then I will install contrail, as usual, using contrail tools (e.g. ansible deployer).

Before jumping into the real stuff, let’s understand how saltstack works. This series of posts is not intended to be a saltstack course. Some previous knowledge about saltstack main concepts is needed. Anyhow, when dealing with some “less basic things” I’ll go through their meaning and usage.

What does a Saltstack architecture look like?

A server will act as salt master. The master will interact with other devices in order to configure them. As already said, this means configuring the IP fabric devices and setting up the contrail servers properly (install packages, configure interfaces, etc…).

The managed devices are commonly called minions: each of them runs a sort of agent, the salt-minion process. Communication between master and minions relies on a bus implemented with ZeroMQ. Basically, the salt master tells minions things like “install this software”, “create this file” and so on. Those “orders” are actually remote calls that are executed on the minion.

Let’s make an example:

  • salt-master targets a minion
  • and tells it, using the salt language, “install apache”
  • the salt-master process talks with the salt-minion process
  • the minion hears the master telling it “install apache”
  • the salt-minion process interprets “install apache” and locally triggers the needed commands (e.g. on an Ubuntu host, “apt-get install apache2”); see the one-liner sketched right after this list
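From the master, such an order boils down to a single remote execution call. A minimal sketch, assuming a minion called cnt_control (we will create it later); the pkg module maps to apt or yum depending on the minion’s OS, so on our CentOS minions the Apache package would actually be httpd:

salt 'cnt_control' pkg.install httpd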

We can say that each contrail server is also a minion. If we plan to create a 1 control + 2 computes cluster, then we will have 3 minions.

Easy right? Yes, as long as we deal with servers.

What if we need to interact with network devices? Often, network devices, for whatever reason, cannot run an agent onboard. To overcome this, salt offers so-called proxy minions to control such devices. A proxy minion is what we use to control Junos devices. Each Junos device has a 1:1 mapping to a proxy minion. Proxy minions are hosted on a server, and a single server can host multiple proxy minions (each proxy requires about 100MB of RAM; keep this in mind when dimensioning your salt architecture). In my lab, there will be an IP Fabric with 3 devices (2 leaves, 1 spine). As a result, the server designated to host proxies will run 3 proxy minions, one per fabric device.

Summing up:

  • to interact with a real server, salt master will talk with that server directly through the minion process running on the server itself
  • to interact with a junos device, salt master will talk with a proxy minion process running on a server. The proxy minion will interpret salt master messages and will talk with the actual junos device

We said proxy minions are the ones talking with the actual devices. But how? They use the Juniper Python library PyEZ. For this reason, besides installing salt on the server that will run the proxies, we will also have to install PyEZ on it before starting the proxies.

Now, the overall architecture should be clearer.

Time to start with salt installation. This is what we are going to do:

  • install salt-master software on the server elected as master
  • on contrail control node and compute nodes, install salt-minion software
  • on a server named “proxy”, install salt-minion packages and, later, run the proxy processes

Let’s start. I’m using CentOS VMs; anyhow, salt can be installed on many other platforms like Ubuntu, RHEL, macOS, etc…

On all those devices, run “yum update”, then one of these commands (the first one for CentOS 8, the second one for CentOS 7):

yum install https://repo.saltstack.com/py3/redhat/salt-py3-repo-3002.el8.noarch.rpm
OR
yum install https://repo.saltstack.com/py3/redhat/salt-py3-repo-latest.el7.noarch.rpm

Move to the master node and run:

yum install -y salt-master salt-ssh salt-syndic
firewall-cmd --add-port={4505/tcp,4506/tcp} --permanent
firewall-cmd --reload 
systemctl enable salt-master.service
systemctl start salt-master.service
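
Before moving on, it may be worth a quick sanity check that the master is running and listening on the two ZeroMQ ports we just opened:

systemctl status salt-master.service
ss -tlnp | grep -E '4505|4506'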

On every minion, run:

yum install -y salt-minion salt-ssh salt-syndic

Next, edit the /etc/salt/minion file to specify how to reach the master by adding this line:

master: <master ip>
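
A small but important detail: each minion presents itself to the master with its minion id, which by default is the host’s FQDN. To get friendly names like the ones used below (cnt_control, cnt_compute1, cnt_compute2), either set the hostnames accordingly or set the id explicitly in the same file; a sketch for the control node:

id: cnt_control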

Finally, run:

systemctl enable salt-minion.service
systemctl start salt-minion.service

As already said, communications between master and minions leverage a ZeroMQ bus. In order to have communications working, minions have to authenticate to the master. Authentication relies on private/public keys: the master has to accept the minions’ keys.

Let’s assume we installed salt-master packages on the master and salt-minion packages on the control node server. Once processes are up and running on both nodes, we go to the master and run:

[root@salt ~]# salt-key -L
 Accepted Keys:
 Denied Keys:
 Unaccepted Keys:
 cnt_control
 Rejected Keys:

We use salt-key to list keys. As you can see we have one key to be accepted. We accept it:

[root@salt ~]# salt-key -A
 The following keys are going to be accepted:
 Unaccepted Keys:
 cnt_control
 Proceed? [n/Y] y
 Key for minion cnt_control accepted.

Now, master and minion cnt_control can talk to each other. We can verify the minion is up by running:

[root@salt salt]# salt-run manage.up
 cnt_control 

Our minion is listed there!
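
Another quick check we can run from the master is a ping over the salt bus; the minion should answer True:

salt 'cnt_control' test.ping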

Repeat the same process for all the servers and we should end up with this:

[root@salt salt]# salt-run manage.up
 cnt_compute1
 cnt_compute2
 cnt_control 

All the contrail nodes are salt minions. This means we can use salt to configure and prepare those nodes!
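
As a small taste of that, thanks to salt’s glob targeting we can already run ad-hoc commands on all the contrail nodes at once:

salt 'cnt_*' cmd.run 'uname -r'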

What’s next? Create proxy minions!

Connect to the proxy server and treat it as if it were a minion:

yum install https://repo.saltstack.com/py3/redhat/salt-py3-repo-3002.el8.noarch.rpm
OR
yum install https://repo.saltstack.com/py3/redhat/salt-py3-repo-latest.el7.noarch.rpm

yum install -y salt-minion salt-ssh salt-syndic

Now, add this line to both /etc/salt/minion and /etc/salt/proxy:

master: <master ip>

This way both the proxy server and the proxy minion know who the master is.

Next, run:

systemctl enable salt-minion.service
systemctl start salt-minion.service

At this point, the proxy server will present itself to the master as a minion. We accept its key as seen before and end up with:

[root@salt salt]# salt-run manage.up
 cnt_compute1
 cnt_compute2
 cnt_control 
 proxy

The proxy server is a minion itself. Anyhow, we will not use salt to perform any action on it; its only job is to host the proxy processes.

Back to the master. We need to define the Junos devices’ credentials. In salt terminology, we need to create pillars.
By default, pillar files must be defined in /srv/pillar (create the folder if needed).

Inside that folder we create a file called leaf1_creds.sls (sls is Salt’s own extension):

[root@salt srv]# cat pillar/leaf1_creds.sls
 proxy:
   proxytype: junos
   host: 10.102.241.13
   username: root
   password: Embe1mpls
   port: 830

Inside that file we declare the switch management IP along with its credentials and the NETCONF port.

Please note that the file is in YAML format. All the sls files we are going to see use YAML.

Remember, the proxy minion will use PyEZ to interact with devices. Specifically, it will use the NETCONF transport provided by PyEZ. As a consequence, do not forget to enable NETCONF on your device:

set system services netconf ssh
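
To double check NETCONF is reachable from the proxy server, we can open an SSH session towards the NETCONF subsystem on port 830; the device should answer with its hello message (Ctrl+C to exit). Just a quick, optional test:

ssh -p 830 root@10.102.241.13 -s netconf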

Still inside the pillar folder, we define the so-called top file. Initially, it looks like this:

[root@salt srv]# cat pillar/top.sls
 base:
   'ipf_leaf1':
     - leaf1_creds

What does that file say? We define a minion called ipf_leaf1 which references a file called leaf1_creds.sls. The content of that sls file will be treated as minion variables (pillars in salt terminology). We might see pillar data as the equivalent of group/host vars in Ansible. This concept will become clearer later on, when we configure servers and switches.

Back to our proxy minion. Salt will build the pillar for minion ipf_leaf1, loading the variables provided inside the YAML-formatted sls files. As a result, there will be a dictionary/map called proxy. The salt master will deliver the pillar data to the corresponding proxy minion, and the proxy minion knows it has to look for a map/dictionary called “proxy” in order to know how to connect to its assigned Junos device.

Save the file and restart salt master service:

systemctl restart salt-master.service
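
Even before the proxy exists, we can ask the master to render the pillar it would deliver to ipf_leaf1; if the files are correct, the proxy dictionary defined above should show up. A quick check, assuming the pillar runner is available in your salt version:

salt-run pillar.show_pillar ipf_leaf1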

Let’s move to the minion. As said before, the junos proxy module relies on PyEZ. Hence, we have to install it:

pip3 install junos-eznc
pip3 install jxmlease

Finally, we start our proxy minion:

salt-proxy --proxyid=ipf_leaf1 -d

Please note that the id is “ipf_leaf1”, the same name we used before when configuring the master!
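
The -d flag daemonizes the process. A quick way to check the proxy is alive, and to troubleshoot if its key never shows up on the master, is to look for the process and at its log file (by default it should be /var/log/salt/proxy):

ps aux | grep '[s]alt-proxy'
tail /var/log/salt/proxy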

The proxy minion will present itself to the master asking for its key to be accepted. As usual, we do accept it and end up with:

[root@salt salt]# salt-run manage.up
 cnt_compute1
 cnt_compute2
 cnt_control 
 proxy
 ipf_leaf1

Here it is! Our proxy minion managing a Junos switch, our leaf number 1.

To verify the interaction with the device works we can run from the master:

salt 'ipf_leaf1' junos.facts

This will print device facts confirming the proxy minion was able to open a connection with the device and retrieve the information.
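
The junos execution module exposes many more functions; for instance, we can run arbitrary operational commands through the proxy:

salt 'ipf_leaf1' junos.cli 'show interfaces terse'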

Now, we need to add pillars for every device, update the top file and start proxy minions on the proxy server.
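
The credentials files for the other devices follow exactly the same pattern as leaf1_creds.sls; here is a sketch for leaf2, where the management IP is a placeholder and I am assuming the same credentials as leaf1:

[root@salt srv]# cat pillar/leaf2_creds.sls
 proxy:
   proxytype: junos
   host: <leaf2 mgmt ip>
   username: root
   password: Embe1mpls
   port: 830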

At the end the top file will look like this:

[root@salt srv]# cat pillar/top.sls
 base:
   'ipf_leaf1':
     - leaf1_creds
   'ipf_leaf2':
     - leaf2_creds
   'ipf_spine1':
     - spine1_creds
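
Once the top file references all the devices (restart the salt master service as we did before so it picks up the new pillar), we start the two remaining proxies on the proxy server, exactly as we did for leaf1:

salt-proxy --proxyid=ipf_leaf2 -d
salt-proxy --proxyid=ipf_spine1 -d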

And all the minions should be up:

[root@salt salt]# salt-run manage.up
 cnt_compute1
 cnt_compute2
 cnt_control 
 proxy
 ipf_leaf1
 ipf_leaf2
 ipf_spine1

Great! Everything is in place.
We can start configuring and provisioning our minions… but not now. Next time!

Ciao
IoSonoUmberto
