Introduction
This chapter explains how to create and deploy a simple SONiC-based Clos topology in WSL using Containerlab. First, we open VS Code from WSL to create and edit a topology definition file. Next, we build the topology by defining nodes (SONiC switches and Linux hosts) and the links between them. Before deploying the lab, we verify the wiring with Containerlab’s built-in topology graph. Finally, we deploy the topology and validate access to the nodes using both a Linux shell and the FRR-based routing CLI (vtysh).
Phase 1: Integrate VS Code with WSL
There are a couple of ways to use VS Code with WSL. In this lab, we launch VS Code from the WSL terminal with the code . command. The first time you run this command, VS Code installs the VS Code Server components inside WSL and then opens a VS Code window connected to the Linux environment. After the installation completes, running code . from any directory opens that folder directly in VS Code.
nwkt@Toni:~$ code .
Updating VS Code Server to version 034f571df509819cc10b0c8129f66ef77a542f0e
Removing previous installation...
Installing VS Code Server for Linux x64 (034f571df509819cc10b0c8129f66ef77a542f0e)
Downloading: 100%
Unpacking: 100%
Unpacked 3505 files and folders to /home/nwkt/.vscode-server/bin/034f571df509819cc10b0c8129f66ef77a542f0e.
Looking for compatibility check script at /home/nwkt/.vscode-server/bin/034f571df509819cc10b0c8129f66ef77a542f0e/bin/helpers/check-requirements.sh
Running compatibility check script
Compatibility check successful (0)
nwkt@Toni:~$
Example 2-3: Open VS Code from WSL (install VS Code Server on first run).
Phase 2: Create Topology File
It is a good practice to create a consistent folder structure for your lab projects. Example 2-1 shows a simple directory layout using the tree command. If tree is not installed, you can add it with sudo apt install tree.
nwkt@Toni:~$ tree
.
├── clos-lab
│   ├── host-config
│   └── switch-config
└── snap

5 directories, 0 files
Example 2-1: Project folder structure.
After creating the folder structure, run code . from the clos-lab directory to open VS Code in the correct working folder. In VS Code, create a new file and name it lab-1.clab.yml (or another name ending in .clab.yml). Because VS Code was opened from the correct folder, the file is saved directly under clos-lab.
Figure 2-1: VS Code: open a new file.
Next, use the Ctrl+K M keyboard shortcut to open the language mode selection drop-down menu and select YAML.
Figure 2-2: VS Code: select language mode.
A Containerlab topology file defines the nodes to start (and their container images) and how those nodes are connected with links. The file begins with a lab name, for example name: nwkt-01. Containerlab uses this value as part of the container naming convention. For example, the node spine-1 is created as clab-nwkt-01-spine-1.
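The naming convention can be sketched in a few lines of shell; the lab and node names are the ones used in this chapter's topology:

```shell
# Containerlab names each container clab-<lab-name>-<node-name>.
# Sketch of the convention using the values from this lab.
lab="nwkt-01"
for node in spine-1 leaf-1 leaf-2 host-1 host-2; do
  echo "clab-${lab}-${node}"
done
```

These full names are what you pass to docker commands later in the chapter (for example, docker exec -it clab-nwkt-01-leaf-1 bash).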
Under the topology: key, the nodes: section defines each node. In this chapter we use kind: sonic-vs with image: docker-sonic-vs:latest for the SONiC switches, and kind: linux with image: alpine:latest for the hosts. A node’s kind tells Containerlab how to boot the node and what features it supports. It also affects how interface names are interpreted for link endpoints.
When using kind: sonic-vs, Containerlab connects the container’s management interface to its management network on eth0. Data-plane interfaces start at eth1 and map to SONiC front-panel ports. For example, in a sonic-vs container eth1 maps to Ethernet0 and eth2 maps to Ethernet4. This is why the links in Example 2-2 use Linux-style names such as spine-1:eth1 and leaf-1:eth1.
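As a quick illustration of that mapping, the following shell sketch derives the SONiC front-panel name from a container interface name. It assumes the default numbering shown above, where ethN corresponds to Ethernet((N-1)*4) because ports are numbered in steps of four lanes; the eth_to_sonic helper is hypothetical, not a Containerlab or SONiC command:

```shell
# Hypothetical helper: map a containerlab data-plane interface (eth1, eth2, ...)
# to its SONiC front-panel port name, assuming ethN -> Ethernet((N-1)*4).
eth_to_sonic() {
  n="${1#eth}"                      # strip the "eth" prefix, leaving the number
  echo "Ethernet$(( (n - 1) * 4 ))"
}

eth_to_sonic eth1   # Ethernet0
eth_to_sonic eth2   # Ethernet4
```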
The links: section describes how nodes are wired together. Each link has two endpoints. For example, endpoints: ["spine-1:eth1", "leaf-1:eth1"] creates a point-to-point link between spine-1 and leaf-1 using their first data-plane interfaces.
name: nwkt-01
topology:
  nodes:
    spine-1:
      kind: sonic-vs
      image: docker-sonic-vs:latest
    leaf-1:
      kind: sonic-vs
      image: docker-sonic-vs:latest
    leaf-2:
      kind: sonic-vs
      image: docker-sonic-vs:latest
    host-1:
      kind: linux
      image: alpine:latest
    host-2:
      kind: linux
      image: alpine:latest
  links:
    # Connections for Leaf-1
    - endpoints: ["spine-1:eth1", "leaf-1:eth1"]
    - endpoints: ["leaf-1:eth2", "host-1:eth1"]
    # Connections for Leaf-2
    - endpoints: ["spine-1:eth2", "leaf-2:eth1"]
    - endpoints: ["leaf-2:eth2", "host-2:eth1"]
Example 2-2: Containerlab topology file: lab-1.clab.yml.
Containerlab topology files typically use the .clab.yml or .clab.yaml extension. When you run containerlab deploy without specifying a topology file, Containerlab looks for a single .clab.yml or .clab.yaml file in the current directory. If multiple matching files exist, use -t to select the desired file (for example, containerlab deploy -t lab-1.clab.yml). Using the .yml extension is common, but .yaml works as well.
Create the topology file as shown in Example 2-2. VS Code provides indentation guides and syntax highlighting for YAML, which makes the file easier to read and helps you avoid indentation errors. Save the file in the clos-lab folder.
Figure 2-3: VS Code YAML editing with indentation and syntax highlighting.
nwkt@Toni:~$ tree
.
├── clos-lab
│   ├── host-config
│   ├── lab-1.clab.yml
│   └── switch-config
└── snap

5 directories, 1 file
Example 2-4: Folder and file structure.
Phase 3: Verify Wiring
Before deploying the topology, it is a good idea to verify that the wiring is correct. Containerlab includes a built-in visualization tool that generates a graphical representation of the topology. The command sudo containerlab graph -t lab-1.clab.yml starts a small local web server (by default on port 50080) and prints one or more URLs you can open in a browser. This is a useful sanity check before deployment, for example, to confirm that spine-1 is connected to the correct interface on leaf-1.
nwkt@Toni:~/clos-lab$ sudo containerlab graph -t lab-1.clab.yml
13:57:22 INFO Parsing & checking topology file=lab-1.clab.yml
13:57:22 INFO Serving topology graph addresses=
│ http://10.255.255.254:50080
│ http://172.25.109.88:50080
│ http://172.17.0.1:50080
│ http://172.20.20.1:50080
│ http://[3fff:172:20:20::1]:50080
Example 2-5: Generate a graphical topology view.
Figure 2-4: Graphical topology view (URL http://172.25.109.88:50080).
Phase 4: Deploy Topology File
After saving lab-1.clab.yml, deploy the lab with sudo containerlab deploy (or explicitly specify the file with -t lab-1.clab.yml). Containerlab parses the topology file, creates a lab directory (clab-<lab-name>), starts the containers, and connects them with the defined links. In the summary table, the Name column shows the full container names (used with docker commands), and the IPv4/6 Address column shows the management IP addresses assigned on the Containerlab management network.
nwkt@Toni:~/clos-lab$ sudo containerlab deploy
11:54:41 INFO Containerlab started version=0.74.3
11:54:41 INFO Parsing & checking topology file=lab-1.clab.yml
11:54:41 INFO Creating lab directory path=/home/nwkt/clos-lab/clab-nwkt-01
11:54:41 INFO Creating container name=host-1
11:54:41 INFO Creating container name=host-2
11:54:41 INFO Creating container name=leaf-1
11:54:41 INFO Creating container name=leaf-2
11:54:41 INFO Creating container name=spine-1
11:54:42 INFO Created link: spine-1:eth1 ▪┄┄▪ leaf-1:eth1
11:54:42 INFO Created link: leaf-1:eth2 ▪┄┄▪ host-1:eth1
11:54:43 INFO Created link: spine-1:eth2 ▪┄┄▪ leaf-2:eth1
11:54:43 INFO Created link: leaf-2:eth2 ▪┄┄▪ host-2:eth1
11:54:43 INFO Adding host entries path=/etc/hosts
11:54:43 INFO Adding SSH config for nodes path=/etc/ssh/ssh_config.d/clab-nwkt-01.conf
11:54:43 INFO containerlab version 🎉=
│ A newer containerlab version (0.75.0) is available!
│ Release notes: https://containerlab.dev/rn/0.75/
│ Run 'clab version upgrade' or see https://containerlab.dev/install/ for other installation options.
╭──────────────────────┬────────────────────────┬─────────┬───────────────────╮
│ Name                 │ Kind/Image             │ State   │ IPv4/6 Address    │
├──────────────────────┼────────────────────────┼─────────┼───────────────────┤
│ clab-nwkt-01-host-1  │ linux                  │ running │ 172.20.20.2       │
│                      │ alpine:latest          │         │ 3fff:172:20:20::2 │
├──────────────────────┼────────────────────────┼─────────┼───────────────────┤
│ clab-nwkt-01-host-2  │ linux                  │ running │ 172.20.20.6       │
│                      │ alpine:latest          │         │ 3fff:172:20:20::6 │
├──────────────────────┼────────────────────────┼─────────┼───────────────────┤
│ clab-nwkt-01-leaf-1  │ sonic-vs               │ running │ 172.20.20.5       │
│                      │ docker-sonic-vs:latest │         │ 3fff:172:20:20::5 │
├──────────────────────┼────────────────────────┼─────────┼───────────────────┤
│ clab-nwkt-01-leaf-2  │ sonic-vs               │ running │ 172.20.20.4       │
│                      │ docker-sonic-vs:latest │         │ 3fff:172:20:20::4 │
├──────────────────────┼────────────────────────┼─────────┼───────────────────┤
│ clab-nwkt-01-spine-1 │ sonic-vs               │ running │ 172.20.20.3       │
│                      │ docker-sonic-vs:latest │         │ 3fff:172:20:20::3 │
╰──────────────────────┴────────────────────────┴─────────┴───────────────────╯
nwkt@Toni:~/clos-lab$
Example 2-6: Topology deployment output.
After deploying the topology, you can use tree to review the lab directory and related files created during the deployment.
nwkt@Toni:~$ tree
.
├── clos-lab
│   ├── clab-nwkt-01
│   │   ├── ansible-inventory.yml
│   │   ├── authorized_keys
│   │   ├── leaf-1
│   │   ├── leaf-2
│   │   ├── nornir-simple-inventory.yml
│   │   ├── spine-1
│   │   └── topology-data.json
│   ├── host-config
│   ├── lab-1.clab.yml
│   └── switch-config
└── snap

9 directories, 5 files
Example 2-7: Updated folder structure after deployment.
Example 2-8 shows how to verify the status of the containers using docker ps. The --format option prints a readable table with the container ID, name, and status.
nwkt@Toni:~$ docker ps -a --format "table {{.ID}}\t{{.Names}}\t{{.Status}}"
CONTAINER ID   NAMES                  STATUS
c67cbb5fe8e8   clab-nwkt-01-host-1    Up 36 minutes
1696b2865f8e   clab-nwkt-01-host-2    Up 36 minutes
b7517c417137   clab-nwkt-01-leaf-2    Up 36 minutes
810267f0cf2b   clab-nwkt-01-leaf-1    Up 36 minutes
60c37f941005   clab-nwkt-01-spine-1   Up 36 minutes
0c01df3ef211   adoring_brattain       Exited (0) 6 days ago
Example 2-8: List containers and verify status.
Phase 5: Test Connection – Log In to Nodes
As a final step, verify that you can access the nodes. To open a Linux shell inside a node container, run docker exec -it clab-nwkt-01-leaf-1 bash. From the shell, start the FRR routing CLI by running vtysh. You can also start the CLI directly with docker exec -it clab-nwkt-01-leaf-1 vtysh.
nwkt@Toni:~$ docker exec -it clab-nwkt-01-leaf-1 bash
root@leaf-1:/#
root@leaf-1:/#
root@leaf-1:/# vtysh
Hello, this is FRRouting (version 10.0.1).
Copyright 1996-2005 Kunihiro Ishiguro, et al.
<snipped for brevity>
leaf-1#
leaf-1#
leaf-1# sh run
Building configuration...
Current configuration:
!
frr version 10.0.1
frr defaults traditional
hostname leaf-1
domainname localdomain
no ipv6 forwarding
no zebra nexthop kernel enable
fpm address 127.0.0.1
no fpm use-next-hop-groups
service integrated-vtysh-config
!
ip nht resolve-via-default
!
ipv6 nht resolve-via-default
!
end
leaf-1#
Example 2-9: Log in to a node and open the routing CLI (vtysh).