Sunday 28 April 2024

Single-AS EVPN Fabric with OSPF Underlay: Underlay Network Multicast Routing: Any-Source Multicast - ASM

Underlay Network Multicast Routing: PIM-SM

In a traditional Layer 2 network, switches forward Intra-VLAN data traffic based on the destination MAC address of Ethernet frames. Therefore, hosts within the same VLAN must resolve each other's MAC-IP address bindings using the Address Resolution Protocol (ARP). When a host wants to open a new IP connection with a device in the same subnet and the destination MAC address is unknown, the connection initiator generates an ARP Request message. In the message, the sender provides its own MAC-IP binding information and queries the MAC address of the owner of the target IP address. ARP Request messages are Layer 2 Broadcast messages with the destination MAC address FF:FF:FF:FF:FF:FF.

An EVPN Fabric is a routed network and requires a solution for Layer 2 Broadcast messages. We can either select a BGP EVPN-based Ingress Replication (IR) solution or enable Multicast routing in the Underlay network. This chapter introduces the latter model. As in the previous Unicast Routing section, we follow the Multicast deployment workflow of the Nexus Dashboard Fabric Controller (NDFC) graphical user interface.

Figure 2-4 depicts the components needed to deploy the Multicast service in the Underlay network. The default option for "RP mode" is ASM (Any-Source Multicast). ASM is a multicast service model in which receivers join a multicast group by sending PIM Join messages towards a group-specific Rendezvous Point (RP). The RP is a "meeting point": multicast sources send their traffic to the RP, which forwards it down the shared multicast tree towards the receivers. This process creates a shared multicast tree from the RP to each receiver. The multicast-enabled routers, in turn, use the Protocol Independent Multicast - Sparse Mode (PIM-SM) routing protocol for forwarding multicast traffic from senders to receivers. In its default operation mode, PIM-SM allows receivers to switch from the shared multicast tree to a source-specific multicast tree. The other option for RP mode is Bidirectional PIM (BiDir), a variant of PIM-SM in which multicast traffic always flows from the sender to the RP and from the RP down to the receivers over the shared multicast tree. In an EVPN Fabric, Leaf switches are both multicast senders (they forward ARP messages generated by local Tenant Systems) and receivers (they want to receive ARP messages generated by Tenant Systems connected to remote Leaf switches).
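
The RP mode selection later shows up as a single keyword in the static RP-to-group mapping command. As a minimal sketch, using the RP address and group range introduced below, the two options would look like this on NX-OS:

! RP mode ASM (default): plain static RP-to-group mapping
ip pim rp-address 192.168.254.1 group-list 239.1.1.0/24
! RP mode BiDir: the same mapping with the bidir keyword
ip pim rp-address 192.168.254.1 group-list 239.1.1.0/24 bidir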

In our example, we define the multicast group range 239.1.1.0/24 using Any-Source Multicast (ASM) and configure both Spine switches as its Rendezvous Points. We publish the Anycast-RP to the Leaf switches using the IP address 192.168.254.1 (Loopback 251). Finally, we enable Protocol Independent Multicast (PIM) Sparse Mode on all Inter-Switch links and Loopback interfaces.


Figure 2-4: EVPN Fabric Protocol and Resources - Broadcast and Unknown Unicast.


Figure 2-5 on the next page illustrates the multicast configuration of our example EVPN Fabric's Underlay network. In this setup, the Spine switches serve as Rendezvous Points (RPs) for the multicast group range 239.1.1.0/24. Spine-11 and Spine-12 publish the configured RP IP address 192.168.254.1/32 (Loopback 251) to the Leaf switches. The Spine switches belong to the same RP-Set group and identify themselves using their Loopback 0 interface addresses. On the Leaf switches, we define that the Multicast Group range 239.1.1.0/24 uses the Rendezvous Point 192.168.254.1.

Leaf switches act as both senders and receivers of multicast traffic. They indicate their willingness to receive traffic for multicast group 239.1.1.0/24 by sending a PIM Join message towards the RP using the destination IP address 224.0.0.13 (All PIM Routers). In the message, they specify the group they want to join. Leaf switches also register themselves with the Rendezvous Point as multicast traffic sources using a PIM Register message, which they send to the configured group-specific Rendezvous Point IP address.



Figure 2-5: EVPN Fabric Underlay Network Multicast Replication.


Configuration


Example 2-6 demonstrates the multicast configuration of the Spine switches. We enable the PIM protocol with the command feature pim. Then, we configure Loopback interface 251, define the IP address 192.168.254.1/32, and add the loopback to the Unicast (OSPF) and Multicast (PIM-SM) routing processes. In addition, we enable Multicast routing on Loopback 0 and on the Inter-Switch interfaces. After the interface configuration, we bind the RP address 192.168.254.1 to the Multicast Group List 239.1.1.0/24 and create an Anycast-RP set in which we list the Spine switches sharing the RP address 192.168.254.1. Note that the switches belonging to the same Anycast-RP set synchronize the multicast sources registered to them. Synchronization information is accepted only from devices listed in the Anycast-RP set.

feature pim
!
interface loopback251
  description Anycast-RP-Shared
  ip address 192.168.254.1/32
  ip router ospf UNDERLAY-NET area 0.0.0.0
  ip pim sparse-mode
!
interface loopback 0
  ip pim sparse-mode
!
interface ethernet 1/1-4
  ip pim sparse-mode
!
ip pim rp-address 192.168.254.1 group-list 239.1.1.0/24
ip pim anycast-rp 192.168.254.1 192.168.0.11
ip pim anycast-rp 192.168.254.1 192.168.0.12

Example 2-6: Multicast Configuration - Spine-11 and Spine-12.


On the Leaf switches, we first enable the PIM feature. Then, we include Loopback interfaces 0, 10, and 20, as well as the Inter-Switch interfaces, in multicast routing. Finally, we specify the IP address of the multicast group-specific Rendezvous Point.


feature pim
!
ip pim rp-address 192.168.254.1 group-list 239.1.1.0/24
!
interface loopback 0
  ip pim sparse-mode
!
interface loopback 10
  ip pim sparse-mode
!
interface loopback 20
  ip pim sparse-mode
!
interface ethernet 1/1-2
  ip pim sparse-mode

Example 2-7: Multicast Configuration – Leaf-101 - 104.

Example 2-8 shows that both Spine switches belong to the Anycast-RP 192.168.254.1 set. Spine-11's own RP-Set identifier IP address is marked with an asterisk (*). The command output also verifies that we have associated the Rendezvous Point with the Multicast Group Range 239.1.1.0/24. Example 2-9 verifies the RP-to-Multicast Group information from the Spine-12 perspective, and Example 2-10 from the Leaf-101 perspective.

Spine-11# show ip pim rp vrf default
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None

Anycast-RP 192.168.254.1 members:
  192.168.0.11*  192.168.0.12

RP: 192.168.254.1*, (0),
 uptime: 00:06:24   priority: 255,
 RP-source: (local),
 group ranges:
 239.1.1.0/24
 
Example 2-8: RP-to-Multicast Group Mapping – Spine-11.


Spine-12# show ip pim rp vrf default
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None

Anycast-RP 192.168.254.1 members:
  192.168.0.11  192.168.0.12*

RP: 192.168.254.1*, (0),
 uptime: 00:05:51   priority: 255,
 RP-source: (local),
 group ranges:
 239.1.1.0/24

Example 2-9: RP-to-Multicast Group Mapping – Spine-12.


Leaf-101# show ip pim rp vrf default
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None

RP: 192.168.254.1, (0),
 uptime: 00:05:18   priority: 255,
 RP-source: (local),
 group ranges:
 239.1.1.0/24

Example 2-10: RP-to-Multicast Group Mapping – Leaf-101.

Example 2-11 confirms that we have enabled PIM-SM on all necessary interfaces. Additionally, the example verifies that Spine-11 has established four PIM adjacencies over the Inter-Switch links Eth1/1-4. Example 2-12 presents the same information from the viewpoint of Leaf-101.

Spine-11# show ip pim interface brief
PIM Interface Status for VRF "default"
Interface            IP Address      PIM DR Address  Neighbor  Border
                                                     Count     Interface
Ethernet1/1          192.168.0.11    192.168.0.101   1         no
Ethernet1/2          192.168.0.11    192.168.0.102   1         no
Ethernet1/3          192.168.0.11    192.168.0.103   1         no
Ethernet1/4          192.168.0.11    192.168.0.104   1         no
loopback0            192.168.0.11    192.168.0.11    0         no
loopback251          192.168.254.1   192.168.254.1   0         no

Example 2-11: Verification of PIM Interfaces – Spine-11.


Leaf-101# show ip pim interface brief
PIM Interface Status for VRF "default"
Interface            IP Address      PIM DR Address  Neighbor  Border
                                                     Count     Interface
Ethernet1/1          192.168.0.101   192.168.0.101   1         no
Ethernet1/2          192.168.0.101   192.168.0.101   1         no
loopback0            192.168.0.101   192.168.0.101   0         no

Example 2-12: Verification of PIM Interfaces – Leaf-101.

Example 2-13 provides more detailed information about the PIM neighbors of Spine-11.

Spine-11# show ip pim neighbor vrf default
PIM Neighbor Status for VRF "default"
Neighbor        Interface    Uptime    Expires   DR       Bidir-  BFD    ECMP Redirect
                                                 Priority Capable State     Capable
192.168.0.101   Ethernet1/1  00:11:29  00:01:41  1        yes     n/a     no
192.168.0.102   Ethernet1/2  00:10:39  00:01:35  1        yes     n/a     no
192.168.0.103   Ethernet1/3  00:10:16  00:01:29  1        yes     n/a     no
192.168.0.104   Ethernet1/4  00:09:58  00:01:18  1        yes     n/a     no
Example 2-13: Spine-11’s PIM Neighbors.

The "Mode" column in Example 2-14 is the initial evidence that we have deployed the Any-Source Multicast service.

Spine-11# show ip pim group-range
PIM Group-Range Configuration for VRF "default"
Group-range        Action Mode  RP-address      Shared-tree-range Origin
232.0.0.0/8        Accept SSM   -               -                 Local
239.1.1.0/24       -      ASM   192.168.254.1   -                 Static
Example 2-14: PIM Group Ranges.

The following three examples show that Multicast Group 239.1.1.0/24 is not yet active. We will return to this after the EVPN Fabric is deployed and we have implemented our first EVPN segment.

Spine-11# show ip mroute
IP Multicast Routing Table for VRF "default"

(*, 232.0.0.0/8), uptime: 00:08:38, pim ip
  Incoming interface: Null, RPF nbr: 0.0.0.0
  Outgoing interface list: (count: 0)

Example 2-15: Multicast Routing Information Base (MRIB) – Spine-11.

Spine-12# show ip mroute
IP Multicast Routing Table for VRF "default"

(*, 232.0.0.0/8), uptime: 00:07:33, pim ip
  Incoming interface: Null, RPF nbr: 0.0.0.0
  Outgoing interface list: (count: 0)

Example 2-16: Multicast Routing Information Base (MRIB) – Spine-12.

Leaf-101# show ip mroute
IP Multicast Routing Table for VRF "default"

(*, 232.0.0.0/8), uptime: 00:06:29, pim ip
  Incoming interface: Null, RPF nbr: 0.0.0.0
  Outgoing interface list: (count: 0)

Example 2-17: Multicast Routing Information Base (MRIB) – Leaf-101.

Next, we configure the Border Gateway Protocol (BGP) as the control plane protocol for the EVPN Fabric's Overlay Network.  

Thursday 25 April 2024

Single-AS EVPN Fabric with OSPF Underlay: Underlay Network Unicast Routing

Introduction


Figure 2-1 illustrates the components essential for designing a Single-AS, Multicast-enabled OSPF Underlay EVPN Fabric. These components need to be established before constructing the EVPN fabric. I've grouped them into five categories based on their function.

  • General: Defines the IP addressing scheme for Spine-Leaf Inter-Switch links, the BGP AS number, the number of BGP Route-Reflectors, and the MAC address of the Anycast Gateway used by client-side VLAN routing interfaces.
  • Replication: Specifies the replication mode for Broadcast, Unknown Unicast, and Multicast (BUM) traffic generated by Tenant Systems. The options are Ingress-Replication and Multicast (with ASM or BiDir RP mode).
  • vPC: Describes vPC multihoming settings such as the vPC Peer Link VLAN ID and Port-Channel ID, the vPC Auto-Recovery and Delay Restore timers, and the vPC Peer Keepalive interface.
  • Protocol: Defines the numbering schema for Loopback interfaces, the OSPF Area identifier, and the OSPF process name.
  • Resources: Reserves IP address ranges for the Loopback interfaces defined in the Protocol category and for the Rendezvous Point specified in the Replication category. In addition, this category reserves Layer 2 and Layer 3 VXLAN (VNI) and VLAN ranges for Overlay network segments; a brief CLI sketch of a few of these settings follows this list.
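
The following fragment is a rough sketch, not NDFC-generated configuration, of how a few of these category settings end up as NX-OS commands on the switches. The OSPF process name and the multicast group range are the ones used in this fabric, whereas the BGP AS number and the Anycast Gateway MAC address are assumed values:

! General: BGP AS number and Anycast Gateway MAC address (assumed values)
router bgp 65000
fabric forwarding anycast-gateway-mac 2020.0000.00aa
! Replication: ASM RP-to-group mapping for BUM traffic
ip pim rp-address 192.168.254.1 group-list 239.1.1.0/24
! Protocol: OSPF process name
router ospf UNDERLAY-NET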

The model presented in Figure 2-1 outlines the steps for configuring an EVPN fabric using the Nexus Dashboard Fabric Controller (NDFC) “Create Fabric” tool. Each category in the figure corresponds to a tab in the NDFC's Easy_Fabric_11_1 Fabric Template.


Figure 2-1: EVPN Fabric Network Side Building Blocks.


Underlay Network Unicast Routing


Let's start the EVPN Fabric deployment process with the definitions in the General, Protocol, and Resources categories for the Underlay network. We won't define a separate subnet for Spine-Leaf Inter-Switch links; instead, we'll use unnumbered interfaces. For the Underlay network routing protocol, we'll choose OSPF and define the process name (UNDERLAY-NET) and Area identifier (0.0.0.0) in the Protocol category. In the Protocol category, we also define the numbering schema for the Loopback interfaces. The Underlay Routing Loopback ID will be 0 (OSPF Router ID and unnumbered Inter-Switch interfaces), the Overlay Network Loopback ID will be 10 (for BGP EVPN peering), and the Loopback ID for VXLAN tunneling will be 20 (outer source and destination IP addresses for VXLAN tunnel encapsulation). In the Resources category, we'll reserve the IP address ranges and assign addresses to each Loopback interface as follows: Loopback 0: 192.168.0.0/24, Loopback 10: 192.168.10.0/24, and Loopback 20: 192.168.20.0/24.



Figure 2-2: EVPN Fabric General, Protocol, and Resources Definitions.


Figure 2-3 illustrates the Loopback addresses we have chosen for the Leaf and Spine switches. Let's take the Leaf-101 switch as an example. We have assigned the IP address 192.168.0.101/32 to the Loopback 0 interface, which Leaf-101 uses as both the OSPF Router ID and the Inter-Switch link IP address. To the Loopback 10 interface, we've assigned the IP address 192.168.10.101/32, which Leaf-101 uses as both the BGP Router ID and the BGP EVPN peering address. To the Loopback 20 interface, we have assigned the IP address 192.168.20.101/32, which Leaf-101 uses as the outer source/destination IP address in VXLAN tunneling. Note that the Loopback 20 address is configured only on Leaf switches. The OSPF process advertises all three Loopback addresses in LSA (Link State Advertisement) messages to all its OSPF neighbors, which then process and forward them to their own OSPF neighbors.



Figure 2-3: EVPN Fabric Loopback Interface IP Addressing.

CLI Configuration


Example 2-1 shows the Underlay network configuration of the EVPN Fabric for Leaf-101. Enable the OSPF feature and create the OSPF process. Then, configure the Loopback interfaces, assign them IP addresses, and associate them with the OSPF process. After that, configure the Inter-Switch Link (ISL) interfaces Eth1/1 and Eth1/2 as unnumbered interfaces that borrow the IP address of the Loopback 0 interface (192.168.0.101/32). Specify the interface medium and the OSPF network type as point-to-point and attach both interfaces to the OSPF process.

The commands "name-lookup" under the OSPF process and global "ip host" commands allow pinging the defined IP addresses by name. Additionally, the "show ip ospf neighbor" command displays OSPF neighbors' names instead of IP addresses. These commands are optional.

conf t
!
hostname Leaf-101
!
feature ospf 
!
router ospf UNDERLAY-NET
  router-id 192.168.0.101
  name-lookup
!
ip host Leaf-101 192.168.0.101
ip host Leaf-102 192.168.0.102
ip host Leaf-103 192.168.0.103
ip host Leaf-104 192.168.0.104
ip host Spine-11 192.168.0.11
ip host Spine-12 192.168.0.12
!
interface loopback 0
 description ** OSPF RID & Inter-Sw links IP addressing **
 ip address 192.168.0.101/32
 ip router ospf UNDERLAY-NET area 0.0.0.0
!
interface loopback 10
 description ** Overlay ControlPlane - BGP EVPN **
 ip address 192.168.10.101/32
 ip router ospf UNDERLAY-NET area 0.0.0.0
!
interface loopback 20
 description ** Overlay DataPlane - VTEP **
 ip address 192.168.20.101/32
 ip router ospf UNDERLAY-NET area 0.0.0.0
!
interface Ethernet1/1-2
  no switchport
  medium p2p
  ip unnumbered loopback0
  ip ospf network point-to-point
  ip router ospf UNDERLAY-NET area 0.0.0.0
  no shutdown

Example 2-1: Leaf-101 - Underlay Network Configuration.
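
The Spine switches follow the same pattern. The fragment below is a sketch of the corresponding Spine-11 Underlay configuration based on the addressing in Figure 2-3 (it is not taken from the lab output); note that a Spine switch has four Inter-Switch links and no Loopback 20, since Spine switches do not act as VTEPs.

! Sketch: Spine-11 Underlay configuration (addresses from Figure 2-3)
hostname Spine-11
!
feature ospf
!
router ospf UNDERLAY-NET
  router-id 192.168.0.11
  name-lookup
!
interface loopback 0
 description ** OSPF RID & Inter-Sw links IP addressing **
 ip address 192.168.0.11/32
 ip router ospf UNDERLAY-NET area 0.0.0.0
!
interface loopback 10
 description ** Overlay ControlPlane - BGP EVPN **
 ip address 192.168.10.11/32
 ip router ospf UNDERLAY-NET area 0.0.0.0
!
interface Ethernet1/1-4
  no switchport
  medium p2p
  ip unnumbered loopback0
  ip ospf network point-to-point
  ip router ospf UNDERLAY-NET area 0.0.0.0
  no shutdown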

Verifications

Example 2-2 shows that the Leaf-101 switch's Ethernet interfaces 1/1 and 1/2, and all three Loopback interfaces, belong to the OSPF process UNDERLAY-NET in OSPF area 0.0.0.0. The OSPF network type for the Ethernet interfaces is set to point-to-point. The example also verifies that the Leaf-101 switch has two OSPF neighbors, Spine-11 and Spine-12.


Leaf-101# show ip ospf interface brief ; show ip ospf neighbors ;
--------------------------------------------------------------------------------
 OSPF Process ID UNDERLAY-NET VRF default
 Total number of interface: 5
 Interface               ID     Area            Cost   State    Neighbors Status
 Eth1/1                  4      0.0.0.0         40     P2P      1         up
 Eth1/2                  5      0.0.0.0         40     P2P      1         up
 Lo0                     1      0.0.0.0         1      LOOPBACK 0         up
 Lo10                    2      0.0.0.0         1      LOOPBACK 0         up
 Lo20                    3      0.0.0.0         1      LOOPBACK 0         up
--------------------------------------------------------------------------------
 OSPF Process ID UNDERLAY-NET VRF default
 Total number of neighbors: 2
 Neighbor ID     Pri State            Up Time  Address         Interface
 Spine-11          1 FULL/ -          00:00:30 192.168.0.11    Eth1/1
 Spine-12          1 FULL/ -          00:00:30 192.168.0.12    Eth1/2

Example 2-2: Leaf-101 show ip ospf neighbors.


Example 2-3 on the next page displays the OSPF Link State Database (LSDB) for the Leaf-101 switch. The first section shows that all switches in the EVPN Fabric have sent descriptions of their OSPF links. Each Spine switch has six OSPF interfaces (2 x Loopback interfaces and 4 x Ethernet interfaces), while each Leaf switch has five OSPF interfaces (3 x Loopback interfaces and 2 x Ethernet interfaces). The second section provides detailed OSPF link descriptions for the Spine-11 switch.

Leaf-101# sh ip ospf database ; show ip ospf database 192.168.0.11 detail
--------------------------------------------------------------------------------
        OSPF Router with ID (Leaf-101) (Process ID UNDERLAY-NET VRF default)
                Router Link States (Area 0.0.0.0)
Link ID         ADV Router      Age        Seq#       Checksum Link Count
192.168.0.11    Spine-11        51         0x8000012c 0x3fcd   6
192.168.0.12    Spine-12        51         0x8000012c 0x4fb9   6
192.168.0.101   Leaf-101        50         0x8000012e 0x9adf   5
192.168.0.102   Leaf-102        615        0x8000012c 0xd0a6   5
192.168.0.103   Leaf-103        607        0x8000012c 0x036f   5
192.168.0.104   Leaf-104        599        0x8000012c 0x3538   5
--------------------------------------------------------------------------------
        OSPF Router with ID (Leaf-101) (Process ID UNDERLAY-NET VRF default)
                Router Link States (Area 0.0.0.0)
   LS age: 51
   Options: 0x2 (No TOS-capability, No DC)
   LS Type: Router Links
   Link State ID: 192.168.0.11
   Advertising Router: Spine-11
   LS Seq Number: 0x8000012c
   Checksum: 0x3fcd
   Length: 96
    Number of links: 6

     Link connected to: a Stub Network
      (Link ID) Network/Subnet Number: 192.168.0.11
      (Link Data) Network Mask: 255.255.255.255
       Number of TOS metrics: 0
         TOS   0 Metric: 1

     Link connected to: a Stub Network
      (Link ID) Network/Subnet Number: 192.168.10.11
      (Link Data) Network Mask: 255.255.255.255
       Number of TOS metrics: 0
         TOS   0 Metric: 1

     Link connected to: a Router (point-to-point)
     (Link ID) Neighboring Router ID: 192.168.0.101
     (Link Data) Router Interface address: 0.0.0.3
       Number of TOS metrics: 0
         TOS   0 Metric: 40

     Link connected to: a Router (point-to-point)
     (Link ID) Neighboring Router ID: 192.168.0.102
     (Link Data) Router Interface address: 0.0.0.4
       Number of TOS metrics: 0
         TOS   0 Metric: 40

     Link connected to: a Router (point-to-point)
     (Link ID) Neighboring Router ID: 192.168.0.103
     (Link Data) Router Interface address: 0.0.0.5
       Number of TOS metrics: 0
         TOS   0 Metric: 40

     Link connected to: a Router (point-to-point)
     (Link ID) Neighboring Router ID: 192.168.0.104
     (Link Data) Router Interface address: 0.0.0.6
       Number of TOS metrics: 0
         TOS   0 Metric: 40

Example 2-3: Leaf-101 – OSPF Link State Database.


Example 2-4 confirms that the Leaf-101 switch has run the Dijkstra algorithm against the LSDB and installed the best routes into the Unicast routing table. Note that for all Leaf switch Loopback IP addresses, there are two equal-cost paths via both Spine switches.


Leaf-101# show ip route ospf
IP Route Table for VRF "default"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>
192.168.0.11/32, ubest/mbest: 1/0
    *via 192.168.0.11, Eth1/1, [110/41], 00:06:40, ospf-UNDERLAY-NET, intra
192.168.0.12/32, ubest/mbest: 1/0
    *via 192.168.0.12, Eth1/2, [110/41], 00:06:40, ospf-UNDERLAY-NET, intra
192.168.0.102/32, ubest/mbest: 2/0
    *via 192.168.0.11, Eth1/1, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
    *via 192.168.0.12, Eth1/2, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
192.168.0.103/32, ubest/mbest: 2/0
    *via 192.168.0.11, Eth1/1, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
    *via 192.168.0.12, Eth1/2, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
192.168.0.104/32, ubest/mbest: 2/0
    *via 192.168.0.11, Eth1/1, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
    *via 192.168.0.12, Eth1/2, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
192.168.10.11/32, ubest/mbest: 1/0
    *via 192.168.0.11, Eth1/1, [110/41], 00:06:40, ospf-UNDERLAY-NET, intra
192.168.10.12/32, ubest/mbest: 1/0
    *via 192.168.0.12, Eth1/2, [110/41], 00:06:40, ospf-UNDERLAY-NET, intra
192.168.10.102/32, ubest/mbest: 2/0
    *via 192.168.0.11, Eth1/1, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
    *via 192.168.0.12, Eth1/2, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
192.168.10.103/32, ubest/mbest: 2/0
    *via 192.168.0.11, Eth1/1, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
    *via 192.168.0.12, Eth1/2, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
192.168.10.104/32, ubest/mbest: 2/0
    *via 192.168.0.11, Eth1/1, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
    *via 192.168.0.12, Eth1/2, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
192.168.20.102/32, ubest/mbest: 2/0
    *via 192.168.0.11, Eth1/1, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
    *via 192.168.0.12, Eth1/2, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
192.168.20.103/32, ubest/mbest: 2/0
    *via 192.168.0.11, Eth1/1, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
    *via 192.168.0.12, Eth1/2, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
192.168.20.104/32, ubest/mbest: 2/0
    *via 192.168.0.11, Eth1/1, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra
    *via 192.168.0.12, Eth1/2, [110/81], 00:06:40, ospf-UNDERLAY-NET, intra

Example 2-4: Leaf-101 – Unicast Routing Table.

Example 2-5 confirms that the Leaf-101 switch has IP connectivity to all Fabric switches' Loopback 0 interfaces. Note that I've added dashes for clarity.


Leaf-101#ping Spine-11 ; ping Spine-12 ; ping Leaf-102 ; ping Leaf-103 ; ping Leaf-104
PING Spine-11 (192.168.0.11): 56 data bytes
64 bytes from 192.168.0.11: icmp_seq=0 ttl=254 time=4.715 ms
64 bytes from 192.168.0.11: icmp_seq=1 ttl=254 time=4.909 ms
<3 x ICMP replies have been removed to fit the entire output on one page>
--- Spine-11 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 1.849/3.369/4.909 ms
-----------------------------------------------------------------------
PING Spine-12 (192.168.0.12): 56 data bytes
64 bytes from 192.168.0.12: icmp_seq=0 ttl=254 time=3.14 ms
64 bytes from 192.168.0.12: icmp_seq=1 ttl=254 time=2.486 ms
<3 x ICMP replies have been removed to fit the entire output on one page>
--- Spine-12 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 1.896/2.279/3.14 ms
-----------------------------------------------------------------------
PING Leaf-102 (192.168.0.102): 56 data bytes
64 bytes from 192.168.0.102: icmp_seq=0 ttl=253 time=6.124 ms
64 bytes from 192.168.0.102: icmp_seq=1 ttl=253 time=4.663 ms
<3 x ICMP replies have been removed to fit the entire output on one page>
--- Leaf-102 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 4.663/5.56/6.794 ms
-----------------------------------------------------------------------
PING Leaf-103 (192.168.0.103): 56 data bytes
64 bytes from 192.168.0.103: icmp_seq=0 ttl=253 time=6.601 ms
64 bytes from 192.168.0.103: icmp_seq=1 ttl=253 time=7.512 ms
<3 x ICMP replies have been removed to fit the entire output on one page>
--- Leaf-103 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 3.674/5.892/7.512 ms
-----------------------------------------------------------------------
PING Leaf-104 (192.168.0.104): 56 data bytes
64 bytes from 192.168.0.104: icmp_seq=0 ttl=253 time=7.109 ms
64 bytes from 192.168.0.104: icmp_seq=1 ttl=253 time=7.777 ms
<3 x ICMP replies have been removed to fit the entire output on one page>
--- Leaf-104 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 5.869/6.822/7.777 ms
Leaf-101#

Example 2-5: Pinging to all Fabric switches Loopback 0 interfaces from Leaf-101.


In the next post, we configure IP-PIM Any-Source Multicast (ASM) routing in the Underlay network. 


Monday 22 April 2024

BGP EVPN with VXLAN: Fabric Overview

 




The figure illustrates the simplified operation model of an EVPN Fabric. At the bottom of the figure are four devices, Tenant Systems (TS), connected to the network. When speaking about a TS, I am referring to a physical or virtual host. In addition, a Tenant System can be a forwarding component attached to one or more Tenant-specific Virtual Networks. Examples of TS forwarding components include firewalls, load balancers, switches, and routers.

We have connected TS1 and TS2 to VLAN 10 and TS3 and TS4 to VLAN 20. VLAN 10 is associated with EVPN Instance (EVI) 10010 and VLAN 20 with EVI 10020. Note that the VLAN ID is switch-specific, while the EVI is fabric-wide. Thus, subnet A can have VLAN ID XX on one Leaf switch and VLAN ID YY on another. However, we must map both VLAN XX and VLAN YY to the same EVPN Instance.
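
On Cisco Nexus switches, this VLAN-to-EVI mapping is done with the vn-segment command under the VLAN. A minimal sketch of the switch-specific VLAN ID versus fabric-wide EVI idea (the second VLAN ID, 30, is an arbitrary example, not taken from the figure):

! Sketch: VLAN-to-EVI mapping (requires feature vn-segment-vlan-based)
! Leaf-101: local VLAN 10 mapped to the fabric-wide EVPN Instance 10010
vlan 10
  vn-segment 10010
! Another Leaf switch could map a different local VLAN ID (e.g. 30) to the same EVI
vlan 30
  vn-segment 10010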

When a TS connected to the Fabric sends its first Ethernet frame, the Leaf switch stores the source MAC address in the MAC address table, from where it is copied to the Layer 2 routing table (L2RIB) of the EVPN Instance. Then, the BGP process of the Leaf switch advertises the MAC address, with its reachability information, to its BGP EVPN peers, essentially the Spine switches. The Spine switches propagate the BGP Update message to their own BGP peers, essentially the other Leaf switches. The Leaf switches install the received MAC address into the L2RIB of the EVI, from which the MAC address is copied to the MAC address table of the VLAN associated with the EVPN Instance. Before TS1 and TS2 in the same VLAN can start communicating, the same process must also occur in the other direction (the TS2 MAC learning process). The operation described above is a Control Plane operation.
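
On NX-OS, this Control Plane chain can be observed step by step with a few commands, listed here only as pointers; the outputs depend on the deployed segments and are not shown in this overview:

! Locally learned MACs, EVI L2RIB entries, and BGP EVPN MAC/IP routes (Route Type 2)
show mac address-table vlan 10
show l2route evpn mac all
show bgp l2vpn evpn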

The traffic between TS1 and TS2 passes through switches Leaf-101, Spine, and Leaf-102. Leaf-101 encapsulates the Ethernet frame sent by TS1 with MAC (Spine) / IP (Leaf-102) / UDP (port 4789) headers and a VXLAN header that identifies the EVPN Instance using the Layer 2 Virtual Network Identifier (L2VNI). After verifying that the destination address of the outer IP packet belongs to it, Leaf-102 removes the tunnel encapsulation and forwards only the original Ethernet frame to TS2.
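
A hedged sketch of the Leaf-side VTEP configuration behind this Data Plane operation is shown below. The L2VNI 10010 matches the figure, while the source interface (a dedicated VTEP loopback, here loopback20) and the multicast group 239.1.1.10 used for BUM traffic are assumed values:

! Sketch: VTEP (NVE) interface on a Leaf switch; loopback20 and 239.1.1.10 are assumptions
feature nv overlay
feature vn-segment-vlan-based
!
interface nve1
  no shutdown
  ! MAC reachability learned via the BGP EVPN Control Plane instead of Flood & Learn
  host-reachability protocol bgp
  ! The loopback providing the outer source IP address for VXLAN encapsulation
  source-interface loopback20
  member vni 10010
    ! Underlay multicast group used for BUM traffic of this EVI
    mcast-group 239.1.1.10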

EVPN Instances associated with the same Tenant/VRF Context share a common L3VNI, over which Ethernet frames between different segments are sent using the L3VNI identifier. To route traffic between two EVPN segments, each VLAN naturally must have a routing interface. The VLAN routing interface is configured on each Leaf switch and associated with the same Anycast Gateway MAC address. In an EVPN Fabric, gateway redundancy does not rely on the HSRP, VRRP, or GLBP protocols. Instead, the gateway is configured on every Leaf switch where the VLAN is deployed. The EVPN routing solution between EVPN segments is called Integrated Routing and Bridging (IRB). Cisco Nexus switches use Symmetric IRB (I will explain its operation in upcoming chapters).
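
A hedged sketch of the Distributed Anycast Gateway configuration on a Leaf switch is shown below; the VRF name TENANT1, the subnet 192.168.11.0/24, and the Anycast Gateway MAC address are illustrative assumptions. The same interface configuration, including the IP address, is repeated on every Leaf switch where VLAN 10 is deployed:

! Sketch: Anycast Gateway SVI for VLAN 10 (VRF name, subnet, and MAC are assumptions)
feature interface-vlan
feature fabric forwarding
!
fabric forwarding anycast-gateway-mac 2020.0000.00aa
!
interface Vlan10
  no shutdown
  vrf member TENANT1
  ip address 192.168.11.1/24
  fabric forwarding mode anycast-gateway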

I am constantly trying to develop simpler and clearer ways to describe the EVPN Fabric operation model. Here is yet another one. This time, I am publishing the article only on LinkedIn, not on my own blog (at least not yet). In the next article, I will apply the same model when presenting the configuration of the EVPN Fabric.