...
Download the VMDK for Arista vEOS (tested with vEOS-lab-4.22.3M.vmdk; there may be issues with early 4.23 releases). Copy this file into three new VMDK files that we will use to create the actual VMs (see the copy commands after this list):
- eosdist1.vmdk
...
- eosdist2.vmdk
...
- eosaccess.vmdk
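To create the copies, run something like this on a Linux/macOS host (adjust the source filename to the vEOS version you downloaded):

cp vEOS-lab-4.22.3M.vmdk eosdist1.vmdk
cp vEOS-lab-4.22.3M.vmdk eosdist2.vmdk
cp vEOS-lab-4.22.3M.vmdk eosaccess.vmdk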
Then start VirtualBox and create three new VMs with 2 GB of memory each, and choose to point them to an existing hard drive file (you will need to "add" the VMDK files you copied earlier to the "Virtual media manager" in VirtualBox). After creating the new VMs, we need to configure the network adapters. Before we can do this, go to the menu File→Host network manager... in VirtualBox and create a new network, for example called vboxnet1, with IP 10.100.2.2 / 255.255.255.0 (no DHCP server). On Linux/macOS you need to allow this IP range by creating /etc/vbox/networks.conf and specifying the allowed ranges there. For example, to allow the 10.0.0.0/8 and 192.168.0.0/16 IPv4 ranges as well as the 2001::/64 range, put the following lines into /etc/vbox/networks.conf:
...
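The file format is documented in the VirtualBox manual; for the ranges above it should look like this (each line starting with * lists allowed prefixes):

* 10.0.0.0/8 192.168.0.0/16
* 2001::/64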
Then configure the VMs with the following network adapters:
eosdist1:
Make sure all NICs are "Intel PRO/1000 T Server" or Desktop, but not "MT", and that NIC 2-4 have Promiscuous mode: Allow all
- NIC1: NAT (Management1)
- NIC2: Host-only adapter: cnaas (Ethernet1)
- NIC3: Internal network: link_d1a1 (Ethernet2)
- NIC4: Internal network: link_d1d2 (Ethernet3)
eosdist2:
Make sure all NICs are "Intel PRO/1000 T Server" or Desktop, but not "MT", and that NIC 2-4 have Promiscuous mode: Allow all
- NIC1: NAT (Management1)
- NIC2: Host-only adapter: cnaas (Ethernet1)
- NIC3: Internal network: link_d2a1 (Ethernet2)
- NIC4: Internal network: link_d1d2 (Ethernet3)
eosaccess:
Make sure all NICs are "Intel PRO/1000 T Server" or Desktop, but not "MT", and that NIC 2-4 have Promiscuous mode: Allow all
- NIC1: NAT (Management1)
- NIC2: Host-only adapter: cnaas (Ethernet1)
- NIC3: Internal network: link_d1a1 (Ethernet2)
- NIC4: Internal network: link_d2a1 (Ethernet3)
You will also need to add some static routing on your host so return traffic will find its way through eosdist1 to the correct VM:
...
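As a sketch only: on a Linux host this is typically an "ip route add" pointing the switches' management network at eosdist1's address on the host-only network. Both the prefix and the next-hop below are assumptions, so substitute the addresses from your own configuration:

sudo ip route add 10.0.6.0/24 via 10.100.2.101   # prefix and next-hop are assumptions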
Start up eosdist1 and eosdist2. When they have booted up, log in with admin/<enter> and enter the command "zerotouch cancel". Then enter a config like this using console/SSH on eosdist1:
...
If the first command doesn't work, something with the interface configuration might be wrong. If the second command doesn't work, it might be that the "ip route add" commands from the previous section are missing.
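Purely as an illustration of the EOS syntax involved (this is not the full configuration, and the address is an assumption), giving eosdist1 an IP on the host-only network and enabling routing looks something like this:

hostname eosdist1
!
interface Ethernet1
   no switchport
   ! 10.100.2.101/24 is an assumed address on the host-only network, use your own addressing
   ip address 10.100.2.101/24
!
ip routing
!
end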
Run integrationtests.sh
Git clone cnaas-nms and go to the directory test/, where you will find a script called integrationtests.sh. This script will start the necessary docker containers and then begin running some tests for ZTP and so on. Before starting the docker containers we need to create a few volumes:
...
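The authoritative volume names are whatever the docker-compose file in the repository references; the names below are examples only:

docker volume create cnaas-templates   # example name, check the compose file
docker volume create cnaas-settings    # example name, check the compose file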
To get authentication working you need a JWT certificate. You can, for example, download this dummy public.pem cert and place it inside the API container at /opt/cnaas/jwtcert/public.pem, or set up some external JWT server like the SUNET auth PoC.
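One way to get the certificate into the running API container (using the container name shown in the docker ps output further down) is docker cp:

docker cp public.pem docker_cnaas_api_1:/opt/cnaas/jwtcert/public.pem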
You are now ready to start the integration tests. When running integrationtests.sh it will wait up to 10 minutes for a device to enter the DISCOVERED state, so you can start by booting up eosaccess now and then start integrationtests.sh. eosaccess should start ZTP boot via DHCP from the DHCPd container started by integrationtests.sh, and then reboot once again. The second time it starts up, a job should be scheduled to discover the device. You can check the progress by tailing logs from the dhcpd and api containers like this: "docker logs -f docker_cnaas_dhcpd_1" (or docker_cnaas_api_1).
After a device in state DISCOVERED has been found, ZTP will automatically start and then the rest of the integration tests will run. After the integration tests have completed you will get a prompt to continue; if you want to log in to VMs or docker containers to check results or errors, now is a good time, otherwise press Enter to continue. Next the script will wait for jobs to finish and then run some unit tests. Once all this is completed the code coverage results will be gathered and optionally uploaded to codecov.io. Code coverage reports will be mapped to your currently checked out git branch.
...
After running integrationtests.sh the following containers should be started. You can check their status with the command "docker ps":
[johanmarcusson@indy-x1 docker]$ docker ps
CONTAINER ID   IMAGE                                    COMMAND                  CREATED         STATUS         PORTS                            NAMES
9d1c5ecd239e   docker_cnaas_dhcpd                       "/bin/sh -c 'supervi…"   3 minutes ago   Up 3 minutes   0.0.0.0:67->67/udp               docker_cnaas_dhcpd_1
0b71d9f984b1   docker_cnaas_api                         "/bin/sh -c 'supervi…"   3 minutes ago   Up 3 minutes   0.0.0.0:443->1443/tcp            docker_cnaas_api_1
d21cfc296bc0   docker.sunet.se/auth-server-poc:latest   "/bin/sh -c 'supervi…"   3 minutes ago   Up 3 minutes   0.0.0.0:2443->1443/tcp           docker_cnaas_auth_1
d933a4324e1c   docker.sunet.se/cnaas/httpd              "/bin/sh -c 'supervi…"   3 minutes ago   Up 3 minutes   1443/tcp, 0.0.0.0:80->1180/tcp   docker_cnaas_httpd_1
6b3bee0d55ce   docker_cnaas_postgres                    "docker-entrypoint.s…"   9 days ago      Up 9 days      0.0.0.0:5432->5432/tcp           docker_cnaas_postgres_1
53c953d2db8f   docker_cnaas_redis                       "docker-entrypoint.s…"   9 days ago      Up 9 days      0.0.0.0:6379->6379/tcp           docker_cnaas_redis_1
If you have some other service already running on, for example, TCP port 443 or UDP port 67, that container will not be able to start. Use "netstat -anp | grep <port>" to find out what program on your computer might be conflicting with the container and stop it.
...
For faster development and debugging you might want to run just the Python API part on your local system instead of in a docker container. This is described in the README at https://github.com/SUNET/cnaas-nms.
The docker image runs Debian 10, which uses Python 3.7.3. If your system Python is not this version you might want to use pyenv:
...
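A sketch of that, assuming pyenv is already installed and that your checkout provides a requirements.txt:

pyenv install 3.7.3
cd cnaas-nms
pyenv local 3.7.3
python3 -m venv venv && source venv/bin/activate
pip install -r requirements.txt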