...

Then start VirtualBox and create three new VMs, using 2 GB of memory each, and choose to point them to an existing hard drive file (you will need to "add" the VMDK files you copied earlier to the "Virtual media manager" in VirtualBox).

After creating the new VMs, we need to configure the network adapters. Before we can do this, go to the menu File→Host Network Manager... in VirtualBox and create a new network called vboxnet1, for example, and enter IP 10.100.2.2 / 255.255.255.0 (no DHCP server). On Linux/macOS you need to allow this IP range by creating /etc/vbox/networks.conf and specifying the allowed ranges there. For example, to allow the 10.0.0.0/8 and 192.168.0.0/16 IPv4 ranges as well as the 2001::/64 IPv6 range, put the following lines into /etc/vbox/networks.conf:

      * 10.0.0.0/8 192.168.0.0/16
      * 2001::/64
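
If you prefer the command line over the GUI, the host-only network can also be created with VBoxManage; this is a sketch that assumes the newly created interface gets the name vboxnet1:

VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet1 --ip 10.100.2.2 --netmask 255.255.255.0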

Then configure the VMs with the following network adapters:

...

NIC4: Internal network: link_d2a1 (Ethernet3)

You will also need to add some static routes on your host so that return traffic will find its way through eosdist1 to the correct VM:

sudo ip route add 10.0.6.0/24 via 10.100.2.101 dev vboxnet1
sudo ip route add 192.168.0.0/24 via 10.100.2.101 dev vboxnet1
sudo ip route add 10.100.3.101/32 via 10.100.2.101 dev vboxnet1
sudo ip route add 10.100.3.102/32 via 10.100.2.102 dev vboxnet1

These commands will not persist through a reboot of the host, and they must be run after the VirtualBox vboxnet1 adapter has started.
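
If you end up re-adding the routes often, a small helper script can reapply them; this is just a sketch, using ip route replace so it is safe to run repeatedly:

#!/bin/sh
# Reapply the static routes once the vboxnet1 adapter exists.
# Run as root (or via sudo); "replace" makes reruns idempotent.
ip route replace 10.0.6.0/24 via 10.100.2.101 dev vboxnet1
ip route replace 192.168.0.0/24 via 10.100.2.101 dev vboxnet1
ip route replace 10.100.3.101/32 via 10.100.2.101 dev vboxnet1
ip route replace 10.100.3.102/32 via 10.100.2.102 dev vboxnet1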

Configure VMs

Start up eosdist1 and eosdist2. Log in with admin/<enter> when they have booted up, and then enter the command "zerotouch cancel". Enter a config like this using the console/SSH on eosdist1:

...

Eosdist2 only needs a hostname of "eosdist2" for the current tests to run, but you could also configure it in a similar way to eosdist1.
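
Setting just the hostname could look something like this on the EOS CLI (standard EOS configuration commands):

enable
configure
hostname eosdist2
end
copy running-config startup-config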

To test that routing through eosdist1 works correctly you can run some ping commands from inside the eosdist1 VM:

ping vrf MGMT 10.100.2.2

ping vrf MGMT 10.100.2.2 source 192.168.0.1

If the first command doesn't work, something in the interface configuration is probably wrong. If the second command doesn't work, the "ip route add" commands from the previous section might be missing.
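
From the host side you can also ask the kernel which route it would pick for the return traffic, for example:

ip route get 192.168.0.1

If the static routes are in place this should show a route via 10.100.2.101 on vboxnet1.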

Run integrationtests.sh

Git clone cnaas-nms and go to the directory test/, where you will find a script called integrationtests.sh. This script will start the necessary docker containers and then begin running some tests for ZTP and so on.
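
Cloning and changing into the test directory might look like this (repository URL taken from the README linked further down):

git clone https://github.com/SUNET/cnaas-nms.git
cd cnaas-nms/test/

Before starting the docker containers we need to create a few volumes: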

...

To get authentication working you need a JWT certificate. You can either download this dummy public.pem cert and place it inside the API container at /opt/cnaas/jwtcert/public.pem, or set up an external JWT server like the SUNET auth PoC.
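
Copying the cert into a running API container could look like this, assuming the container name docker_cnaas_api_1 used further down:

docker cp public.pem docker_cnaas_api_1:/opt/cnaas/jwtcert/public.pem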

You are now ready to start the integration tests. When running integrationtests.sh it will wait up to 10 minutes for a device to enter the DISCOVERED state, so you can boot up eosaccess now and then start integrationtests.sh. eosaccess should ZTP boot via DHCP from the DHCPd container started by integrationtests.sh, and then reboot once more. The second time it starts up a job should be scheduled to discover the device. You can check the progress by tailing the logs from the dhcpd and api containers:
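
docker logs -f docker_cnaas_dhcpd_1
docker logs -f docker_cnaas_api_1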

...

docker exec -i -t docker_cnaas_postgres_1 psql -U cnaas
INSERT INTO linknet (device_a_id, device_b_id, device_a_port, device_b_port) VALUES (3, 2, 'Ethernet3', 'Ethernet2');
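
The device IDs used above (3 and 2) depend on the order in which devices were added; a quick sanity check, assuming the table is named device, would be:

SELECT id, hostname FROM device;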

Running the API outside of docker

For faster development and debugging you might want to run just the Python API part on your local system instead of in a docker container. This is described in the README at https://github.com/SUNET/cnaas-nms

The docker image runs Debian 10, which uses Python 3.7.3. If your system Python is not this version you might want to use pyenv:

# Install pyenv and the pyenv-virtualenv plugin (the package name varies between distributions)
apt-get/dnf install pyenv
# Build Python 3.7.3 and create a virtualenv based on it
pyenv install 3.7.3
pyenv virtualenv 3.7.3 cnaas373
cd cnaas-nms/
# The cp target below expects /etc/cnaas-nms/ to exist
sudo mkdir -p /etc/cnaas-nms
sudo cp etc/db_config.yml.sample /etc/cnaas-nms/db_config.yml
pyenv local cnaas373
python3 -m pip install -r requirements.txt
cd src/
python3 -m cnaas_nms.run

You will need to have postgres and redis running to be able to start the API. You can start just those containers with:

cd docker/
docker-compose up --no-deps -d cnaas_postgres
docker-compose up --no-deps -d cnaas_redis
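
You can then verify that both containers are up with:

docker-compose ps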