Prepare a ready-to-use Contrail Command VM to import clusters without Internet connectivity

Contrail Command is deployed using a container called the Contrail Command deployer.
Basically, you create a VM and start a container (the command deployer) that deploys Contrail Command, which in turn consists of two containers: the actual Contrail Command application and a database. When the installation finishes we expect to have 3 containers on our VM: 2 are active (Contrail Command and the database) while 1 is in exit status (the Contrail Command deployer).
Deployment leverages Ansible playbooks.
These playbooks require internet connectivity in order to download the needed software packages (e.g. Docker CE) and to log in to the Juniper docker hub and pull containers (Contrail Command and postgres).
This means that, by default, these playbooks cannot be used in an environment without internet.
When running the contrail command deployer, it is possible to specify an action:
– if NO action is specified, then Contrail Command is deployed but no cluster is imported
– if action “import_cluster” is specified, then Contrail Command is deployed and a given cluster (e.g. a RHOSP cluster) is imported
Be aware that, when using the import_cluster action, the deployer both deploys Command and imports the cluster.
Anyhow, as we will see, this is not an issue for our use-case.
It is true that it is possible to install Command, access the GUI and provision a cluster but, here, I had to import an existing cluster, not provision a new one.
The problem I faced was that Contrail Command had to be installed in a lab with no internet connectivity, yet it was needed in order to import an existing RHOSP cluster.
Running the Contrail Command deployer with action “import_cluster” will not work there, as playbook execution will fail when reaching tasks that try to download packages from the internet.
One solution would be to create some local repositories and have the command VM get packages from there.
Anyhow, if you want to avoid the burden of building local repos, we can create a “baseline” VM. This VM is built somewhere else (NOT in the no-internet environment) with internet connectivity, and includes a modified deployer container image that does not require internet connectivity. The idea is to leverage this new container to import the cluster once in an environment with no internet.
The “baseline” VM becomes a sort of virtual appliance you download and run in your environment.
Think of other Juniper products like Junos Space. You download an OVA/qcow2 image, you deploy the image already containing all the necessary software, adjust some configuration (e.g. interface address) and finally discover devices.
The concept is similar here: we have a qcow2 file with contrail command ready inside it. We simply have to create a VM from that image and import the cluster (and this import action does not require internet connectivity).
The first step is to build this baseline VM. This VM has contrail command installed on it but it is a “cluster-less” VM, meaning that no contrail cluster has been imported.
This procedure is done on a machine that has internet connectivity.
This VM uses CentOS 7 as its base OS.
We use this shell script to create a CentOS VM with a 100G disk, starting from a CentOS 7 cloud image (centos7cloudbase.qcow2):

vm_name=contrail-command-base
vm_suffix=local
root_password=Juniper
stack_password=Juniper
export LIBGUESTFS_BACKEND=direct
sudo qemu-img create -f qcow2 /var/lib/libvirt/images/${vm_name}.qcow2 100G
sudo virt-resize --expand /dev/sda1 /var/lib/libvirt/images/centos7cloudbase.qcow2 /var/lib/libvirt/images/${vm_name}.qcow2
sudo virt-customize -a /var/lib/libvirt/images/${vm_name}.qcow2 \
  --run-command 'xfs_growfs /' \
  --root-password password:${root_password} \
  --hostname ${vm_name}.${vm_suffix} \
  --run-command 'useradd stack' \
  --password stack:password:${stack_password} \
  --run-command 'echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack' \
  --chmod 0440:/etc/sudoers.d/stack \
  --run-command 'sed -i "s/PasswordAuthentication no/PasswordAuthentication yes/g" /etc/ssh/sshd_config' \
  --run-command 'systemctl enable sshd' \
  --run-command 'yum remove -y cloud-init' \
  --selinux-relabel

sudo virt-install --name ${vm_name} \
  --disk /var/lib/libvirt/images/${vm_name}.qcow2 \
  --vcpus=4 \
  --ram=32000 \
  --network network=VLAN-70,model=virtio,mac='00:00:00:92:00:70' \
  --network network=VLAN-71,model=virtio,mac='00:00:00:92:00:71' \
  --network network=VLAN-72,model=virtio,mac='00:00:00:92:00:72' \
  --network network=VLAN-73,model=virtio,mac='00:00:00:92:00:73' \
  --network network=VLAN-74,model=virtio,mac='00:00:00:92:00:74' \
  --virt-type kvm \
  --import \
  --os-variant rhel7 \
  --graphics vnc \
  --serial pty \
  --noautoconsole \
  --console pty,target_type=virtio

The VM is connected to multiple existing bridges (Linux bridges in this case). Those bridges connect our VM to the RHOSP networks (provisioning, external API, management, tenant, etc…). What matters is that the VM has connectivity on the provisioning network (to talk to the Director) and on the external API network (to talk to the overcloud VIP).
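The shell script above does not configure any addressing, so each relevant interface needs an ifcfg file inside the VM. A minimal sketch for the provisioning-facing NIC could look like this (interface name and addresses are placeholders, use your lab values):

# /etc/sysconfig/network-scripts/ifcfg-eth0 (placeholder values)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.24.80
NETMASK=255.255.255.0
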
Next, we prepare the VM for command:

yum -y update
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce-18.03.1.ce
systemctl start docker
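
Optionally, a quick sanity check that the Docker engine is up before pulling images:

systemctl status docker
docker version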

and we pull the Contrail Command deployer container image:

docker login hub.juniper.net --username  --password
docker pull hub.juniper.net/contrail/contrail-command-deployer:2003.1.40

After this, we create the command_servers.yml file:

command_servers:
    server1:
        ip:
        connection: ssh
        ssh_user: root
        ssh_pass: Juniper
        sudo_pass: Juniper
        ntpserver: 

        registry_insecure: false
        container_registry: hub.juniper.net/contrail
        container_tag: 2003.1.40
        container_registry_username: ...
        container_registry_password: ...
        config_dir: /etc/contrail

        contrail_config:
            database:
                type: postgres
                dialect: postgres
                password: contrail123
            keystone:
                assignment:
                    data:
                      users:
                        admin:
                          password: contrail123
            insecure: true
            client:
                password: contrail123

We add an entry inside /etc/hosts so that Command can reach the overcloud VIP IP on the external API network.

 overcloud overcloud.localdomain

The FQDN is the one found for the VIP inside /etc/hosts on the OpenStack controller nodes.
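A quick way to verify that the entry is picked up and the VIP is reachable over the external API network (assuming ICMP is not filtered):

getent hosts overcloud.localdomain
ping -c 2 overcloud
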
Finally, we deploy contrail command:

docker run -td --net host -v /root/command_servers.yml:/command_servers.yml --privileged --name contrail_command_deployer hub.juniper.net/contrail/contrail-command-deployer:2003.1.40
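
The playbook run can be followed through the deployer container logs; once it completes, we should see the container layout described earlier (two containers up, the deployer in exit status):

docker logs -f contrail_command_deployer
docker ps -a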

The deployer installs Command along with all the needed packages and software.
Once the installation has completed, we can use “docker commit” to create a new image based on the official contrail-command-deployer one:

docker start contrail_command_deployer
docker commit --change='CMD ["bash"]' contrail_command_deployer interm:v1
docker stop contrail_command_deployer

We set CMD to bash so that it does not start executing any playbook and stays alive indefinitely.
We run that container:

docker run -td --net host -v /root/command_servers.yml:/command_servers.yml --privileged --name interm interm:v1

and connect to it.
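For example, to get a shell inside it:

docker exec -it interm bash
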
Once inside, we can easily locate playbooks:

[root@contrail-command-base /]$ ls contrail-command-deployer/playbooks/
deploy.yml  generate.yml  import_cluster.yml  roles

From there we have to locate all those templates causing the deployer to try connecting to the Internet.
We need to remove these “meta” folders:

rm -rf contrail-command-deployer/playbooks/roles/import_cluster/meta
rm -rf contrail-command-deployer/playbooks/roles/docker/meta

The files inside these “meta” folders look like this:

---
dependencies:
  - { role: install_packages }
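
To double-check that no other role still pulls in internet-facing dependencies, a quick search through the playbooks can help (a sketch):

grep -r "install_packages" contrail-command-deployer/playbooks/roles/
find contrail-command-deployer/playbooks/roles -type d -name meta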

Next, back in the main playbooks folder, we locate the deploy.yml file.
There we need to remove a line under “roles”, going from:

  roles:
    - create_configs
    - { role: docker, when: ansible_os_family != 'Darwin' }
    - launch_containers
    - init_db

to

  roles:
    - create_configs
    - launch_containers
    - init_db

We basically removed everything that gets data from the internet. This includes package installation (e.g. Docker CE, python pip, …) and docker hub interaction (login, image pulls).
The idea behind this approach is that all those tasks were already performed, so packages are installed and images are available locally. For this reason, when we bring the baseline VM to a non-internet lab, having removed those tasks is not a problem, as the VM already contains everything those tasks would normally provide.
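If you prefer doing that deploy.yml edit non-interactively, a one-liner like this should work (a sketch, assuming the “role: docker” line appears exactly as shown above):

sed -i '/{ role: docker/d' contrail-command-deployer/playbooks/deploy.yml
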
We exit the container and commit it:

docker commit --change='CMD ["/bin/deploy_contrail_command"]' interm ccd_noinst:v1

We also save the new image into a tar file so that we can keep it somewhere for future usage (the output file name is arbitrary):

docker save -o ccd_noinst_v1.tar ccd_noinst:v1
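
Should the image ever be missing from the local Docker store (for example on a rebuilt VM), it can be restored from that tar file:

docker load -i ccd_noinst_v1.tar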

We should now have 4 images in our local docker repo:

[root@contrail-command-base ~]# docker images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
ccd_noinst                                           v1                  a1c0462058f8        19 minutes ago      722MB
hub.juniper.net/contrail/contrail-command            2003.1.40           138f3f79f03c        2 months ago        2.08GB
hub.juniper.net/contrail/contrail-command-deployer   2003.1.40           fc82302a9ec0        2 months ago        721MB
circleci/postgres                                    10.3-alpine         2b14bf6f5037        2 years ago         39.5MB

We also remove the 0/0 route by setting DEFROUTE=no in the appropriate ifcfg file.
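A sketch of how to locate and change it (the interface name in the sed line is an assumption; pick the file that currently carries the default route, and simply add DEFROUTE=no if the variable is not there yet):

grep DEFROUTE /etc/sysconfig/network-scripts/ifcfg-*
sed -i 's/^DEFROUTE=.*/DEFROUTE=no/' /etc/sysconfig/network-scripts/ifcfg-eth0
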
Our VM now has command installed!
Let’s get back to the hypervisor.
We turn off the VM:

virsh destroy contrail-command-base

We create a copy of the qcow2:

cp /var/lib/libvirt/images/contrail-command-base.qcow2 ./cc_base.qcow2

This new qcow2 file is our baseline VM!
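If the file has to travel over a slow link, a compressed copy can be produced instead of a plain cp (a sketch; qemu-img rewrites the qcow2 with compression enabled):

qemu-img convert -O qcow2 -c /var/lib/libvirt/images/contrail-command-base.qcow2 ./cc_base.qcow2
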
Now, we move to the environment where Command must be installed but there is no internet.
We bring with us the baseline qcow2 file (cc_base.qcow2)!
This image comes with:
– no 0/0 route (no way to reach the internet even if it were available; this is useful in case you want to test this approach in a lab with internet while simulating a no-internet one)
– modified command deployer container image (as a tar file)
– contrail command installed (no cluster imported)
– command_servers.yml file
We copy the baseline image in the standard libvirt images folder:

cp cc_base.qcow2 /var/lib/libvirt/images/cc_jlab.qcow2

and we run the VM:

sudo virt-install --name cc_jlab \
  --disk /var/lib/libvirt/images/cc_jlab.qcow2 \
  --vcpus=4 \
  --ram=32000 \
  --network network=VLAN-70,model=virtio,mac='00:00:00:92:00:70' \
  --network network=VLAN-71,model=virtio,mac='00:00:00:92:00:71' \
  --network network=VLAN-72,model=virtio,mac='00:00:00:92:00:72' \
  --network network=VLAN-73,model=virtio,mac='00:00:00:92:00:73' \
  --network network=VLAN-74,model=virtio,mac='00:00:00:92:00:74' \
  --virt-type kvm \
  --import \
  --os-variant rhel7 \
  --graphics vnc \
  --serial pty \
  --noautoconsole \
  --console pty,target_type=virtio

Please notice that the VM comes up with the IP addresses used when creating the baseline VM. If changes are needed, apply them as sketched below.
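A sketch of how to adjust addressing once logged in, assuming the CentOS 7 network scripts and an interface called eth0 (adapt to your VM):

vi /etc/sysconfig/network-scripts/ifcfg-eth0
systemctl restart network
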
Log into the VM and start Docker:

systemctl start docker

Finally, run the modified deployer container with action “import_cluster”:

docker run -td --net host -e orchestrator=tripleo -e action=import_cluster -e undercloud= -e undercloud_password=Juniper -v /root/command_servers.yml:/command_servers.yml --privileged --name cc_import ccd_noinst:v1

You need to provide the provisioning IP of the Director in the undercloud variable.
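The import can be followed through the container logs:

docker logs -f cc_import
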
Once execution is over, the cluster is imported into Command!
Now, you can re-add the 0/0 route where needed and connect to the Command GUI.
The big effort is building the baseline VM. After that, it is possible to re-use that VM in any non-internet environment requiring Command.
Ciao
IoSonoUmberto