[DEPRECATED] Vagrantfile used to provision the Cumulus Demo Reference Topology
Welcome to the Cumulus Linux Demo Framework, which provides virtual demos of Cumulus Linux features and configurations. Follow the Prerequisites and Getting Started instructions below for your system.
Once you've followed the prerequisite and getting-started instructions for your system, you can run any of the demos below.
Demos are built upon the Reference Topology as a starting point and then layer specific device configuration on top.
CLDEMO-VAGRANT is the name of this repository. It provides a consistent physical topology of VMs cabled together in a configuration we refer to as the Reference Topology. This topology serves as a consistent simulation base on which many different configurations can be overlaid; the individual demos supply the interface and routing protocol configurations that are applied on top of it.
The Cumulus Linux Demo Framework is built upon a Vagrantfile that builds the Reference Topology. Using this topology, it is possible to demonstrate any feature in Cumulus Linux. A particular demo may not use every link or device, but they are present if needed.
This framework of demos is built on a two-tier spine-leaf Clos network with a dedicated out-of-band management network. The Reference Topology built in this repository is used for all Cumulus Networks documentation, demos, and course materials, so many demos will require you to build a topology using the code available in this repository.
This repository makes use of Cumulus VX which is a virtual machine produced by Cumulus Networks to simulate the user experience of configuring a switch using the Cumulus Linux network operating system.
Vagrant is an open source tool for quickly deploying large topologies of virtual machines. Vagrant and Cumulus VX can be used together to build virtual simulations of production networks to validate configurations, develop automation code, and simulate failure scenarios.
Vagrant topologies are described in a Ruby program called a Vagrantfile (which is also the filename). The Vagrantfile tells Vagrant which devices to create and how to configure their networks. Running vagrant up executes the Vagrantfile and creates the Reference Topology using VirtualBox.
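As a rough illustration of what a Vagrantfile contains, the hypothetical two-node sketch below defines two Cumulus VX machines joined by one internal network. It is NOT the Reference Topology from this repository; the interface wiring and machine names are invented for illustration (the CumulusCommunity/cumulus-vx box name is the one published on Vagrant Cloud):

```ruby
# Hypothetical minimal Vagrantfile sketch -- not this repository's topology.
Vagrant.configure("2") do |config|
  config.vm.define "leaf01" do |node|
    node.vm.box = "CumulusCommunity/cumulus-vx"   # Cumulus VX box on Vagrant Cloud
    # Unmanaged point-to-point link; VirtualBox internal network name is arbitrary
    node.vm.network "private_network", virtualbox__intnet: "net1", auto_config: false
  end
  config.vm.define "spine01" do |node|
    node.vm.box = "CumulusCommunity/cumulus-vx"
    node.vm.network "private_network", virtualbox__intnet: "net1", auto_config: false
  end
end
```

The real Vagrantfile in this repository follows the same pattern, repeated for every device and link in the Reference Topology.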
Libvirt/KVM is a high-performance hypervisor that is available only on Linux systems. Vagrantfiles for the Libvirt/KVM hypervisor are also included in this repository. To use them, you must be on a Linux system; follow the Linux setup instructions.
Libvirt/KVM offers several notable performance advantages over VirtualBox; as a result, it tends to be the most common hypervisor for larger simulations.
Software versions are always changing. At the time of this writing the following versions are known to work well:
The following tasks are completed to make using the topology more convenient.
After the topology comes up, we use vagrant ssh to log in to the management device and switch to the cumulus user. The cumulus user is able to access other devices (leaf01, spine02) in the network using its SSH key, and has passwordless sudo enabled on all devices to make it easy to run administrative commands. Further, most automation tools (Ansible, Puppet, Chef) are run from this management server. Most demos assume that you are logged into the out-of-band management server as the cumulus user.
Note that due to the way we simulate the out-of-band network, it is not possible to use vagrant ssh to access in-band devices like leaf01 and leaf02. These devices must be accessed via the out-of-band management server.
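If you prefer to reach in-band devices directly from your workstation, one convenient (hypothetical) approach is an SSH ProxyJump entry that hops through the management server. The sketch below is an assumption, not part of this repository: the oob-mgmt-server host entry must first be filled in from the output of vagrant ssh-config oob-mgmt-server, and the device list should match your topology.

```
# ~/.ssh/config sketch -- host names and user are assumptions, adjust as needed.
# Populate HostName/Port/IdentityFile for oob-mgmt-server from:
#   vagrant ssh-config oob-mgmt-server
Host oob-mgmt-server
    User cumulus

Host leaf01 leaf02 spine01 spine02
    User cumulus
    ProxyJump oob-mgmt-server
```

With this in place, ssh leaf01 from the workstation transparently hops through the out-of-band management server.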
The Reference Topology only specifies the IP addresses used in the out-of-band network, for maximum flexibility when creating new demos. To see the IP address allocation for the out-of-band network, check the IPAM diagram.
The topology built using this Vagrantfile does not support vagrant halt or vagrant resume for in-band devices. To resume working with the demos at a later point in time, use the hypervisor's halt and resume functionality.
In VirtualBox this can be done in the GUI by powering off (and later powering on) the devices involved in the simulation, or by running the following CLI commands:
* VBoxManage controlvm leaf01 poweroff
* VBoxManage startvm leaf01 --type headless
When using the libvirt/kvm hypervisor the following commands can be used:
* virsh destroy cldemo-vagrant_leaf01
* virsh start cldemo-vagrant_leaf01
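Halting several devices one by one gets tedious, so a small shell loop can build the per-device commands. The sketch below only prints the libvirt commands for review rather than running them; the device list and the cldemo-vagrant_ domain prefix are assumptions based on this repository's defaults, so verify them with virsh list --all first.

```shell
#!/bin/sh
# Build (rather than immediately run) the virsh commands that would power off
# a set of devices. Review the output, then pipe it to sh; change ACTION to
# "start" to generate the matching power-on commands.
ACTION="destroy"          # "destroy" = hard power-off; "start" = power back on
PREFIX="cldemo-vagrant_"  # libvirt domain prefix used by this Vagrantfile (assumption)
CMDS=""
for dev in leaf01 leaf02 spine01 spine02; do
    CMDS="${CMDS}virsh ${ACTION} ${PREFIX}${dev}
"
done
printf '%s' "$CMDS"
```

The same pattern works for VirtualBox by swapping in VBoxManage controlvm / startvm commands.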
To rebuild an individual device from scratch, destroy and re-create it; to tear down the entire topology, destroy all devices:
* vagrant destroy -f leaf01
* vagrant up leaf01
* vagrant destroy -f
To keep your configuration across Vagrant sessions, either save your configuration in a repository using an automation tool such as Ansible, Puppet, or Chef (preferred), or copy the configuration files off of the VMs before running the vagrant destroy command that removes the VMs involved in the simulation.
One helpful command for saving configuration from Cumulus devices is:
net show configuration files
or
net show configuration commands
This command will not show configuration for third-party applications.
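One way to capture that output for every switch before a destroy is a loop run from the oob-mgmt-server, which already has passwordless SSH to the devices. The sketch below builds the command list for review rather than executing it; the device list is an assumption, so extend it to every switch in your topology.

```shell
#!/bin/sh
# Build the backup commands to review before running them on the oob-mgmt-server.
# Each command saves one switch's NCLU configuration to a local file.
BACKUP_CMDS=""
for dev in leaf01 leaf02 spine01 spine02; do
    BACKUP_CMDS="${BACKUP_CMDS}ssh ${dev} \"net show configuration commands\" > ${dev}.conf
"
done
printf '%s' "$BACKUP_CMDS"    # review, then pipe to sh on the oob-mgmt-server
```

Remember that this only captures NCLU-managed configuration; files for third-party applications must be copied separately.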
Using this demo environment, it is possible to run multiple simulations at once. The procedure varies slightly from hypervisor to hypervisor.
In the Vagrantfile built for VirtualBox, there is a line that sets simid = [some integer]. To create a unique simulation, use a text editor to change the simid value to one that does not match any other simulation running on the same host.
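This edit can also be scripted. The sketch below performs the substitution on a scratch stand-in file rather than the real Vagrantfile; the simid = 1 line format is an assumption, so check how simid is written in your copy before adapting the sed expression (GNU sed shown; macOS sed needs -i '').

```shell
#!/bin/sh
# Demonstrate changing the simid value on a scratch copy of the line.
printf 'simid = 1\n' > /tmp/Vagrantfile.demo        # stand-in for the real file
sed -i 's/^simid *= *[0-9]*/simid = 42/' /tmp/Vagrantfile.demo
cat /tmp/Vagrantfile.demo
```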
In the Vagrantfile built for Libvirt (Vagrantfile-kvm), virtual networks are built link by link using UDP tunnels. To make sure that simulations do not collide with each other, each simulation must use a unique range of UDP ports. By default the demo uses ports 8000-10000, but these values can be changed either by:
A). Running the Customize the Topology workflow below and providing a -s argument
OR
B). Modifying the Vagrantfile-kvm directly, changing the leading digits of the port numbers to a range that does not overlap with any running applications or other simulations. In the example below, we move the ports used by the simulation from 8000-10000 to 30000-32000.
* _port => '8 --> _port => '30
* _port => '9 --> _port => '31
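The substitution above can be applied with sed. The sketch below runs it on a scratch stand-in file so the result can be checked before touching the real Vagrantfile-kvm; the _port => '8014' line format follows this document's own example, so confirm it matches your file first (GNU sed shown; macOS sed needs -i '').

```shell
#!/bin/sh
# Demonstrate the 8000-10000 -> 30000-32000 port remap on a scratch copy.
printf "_port => '8014',\n_port => '9271',\n" > /tmp/vfkvm.demo
sed -i -e "s/_port => '8/_port => '30/g" \
       -e "s/_port => '9/_port => '31/g" /tmp/vfkvm.demo
cat /tmp/vfkvm.demo
```

Replacing only the leading digit preserves the rest of each port number, so every tunnel keeps a unique port within the new range.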
This Vagrant topology is built using Topology Converter. To create your own arbitrary topology, we recommend using Topology Converter, which will create a new Vagrantfile specific to your environment. For more details on how to make customized topologies, read Topology Converter's documentation.
Editing the existing topologies can be a bit tricky; to do so, bring in the required portions of Topology Converter needed to get the job done.
The process looks like what is featured below and is also found in the build.sh
script used to rebuild and update this environment.
vagrant destroy -f
wget https://gitlab.com/cumulus-consulting/tools/topology_converter/-/raw/master/topology_converter.py
mkdir ./templates/
wget -O ./templates/Vagrantfile.j2 https://gitlab.com/cumulus-consulting/tools/topology_converter/-/raw/master/templates/Vagrantfile.j2
# edit topology.dot as desired
python topology_converter.py topology.dot
Before running this demo or any of the other demos in the list below, install VirtualBox and Vagrant.
NOTE: On Windows, if you have Hyper-V enabled, you will need to disable it, as it conflicts with VirtualBox's ability to create 64-bit VMs.
git clone https://github.com/cumulusnetworks/cldemo-vagrant
cd cldemo-vagrant
vagrant up oob-mgmt-server oob-mgmt-switch leaf01
vagrant ssh oob-mgmt-server
ssh leaf01
©2017 Cumulus Networks. CUMULUS, the Cumulus Logo, CUMULUS NETWORKS, and the Rocket Turtle Logo (the “Marks”) are trademarks and service marks of Cumulus Networks, Inc. in the U.S. and other countries. You are not permitted to use the Marks without the prior written consent of Cumulus Networks. The registered trademark Linux® is used pursuant to a sublicense from LMI, the exclusive licensee of Linus Torvalds, owner of the mark on a world-wide basis. All other marks are used under fair use or license from their respective owners.
For further details please see: cumulusnetworks.com