Sunday 28 April 2024

Single-AS EVPN Fabric with OSPF Underlay: Underlay Network Multicast Routing: Any-Source Multicast - ASM

Underlay Network Multicast Routing: PIM-SM

In a traditional Layer 2 network, switches forward intra-VLAN data traffic based on the destination MAC address of Ethernet frames. Therefore, hosts within the same VLAN must resolve each other's MAC-IP address bindings using the Address Resolution Protocol (ARP). When a host wants to open a new IP connection to a device in the same subnet and the destination MAC address is unknown, the connection initiator generates an ARP Request message. In the message, the sender provides its own MAC-IP binding and queries the MAC address of the owner of the target IP address. ARP Request messages are Layer 2 broadcast frames with the destination MAC address FF:FF:FF:FF:FF:FF.
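
To make the exchange concrete, the sketch below shows the essential fields of an ARP Request and the corresponding ARP Reply between two hypothetical hosts in the same subnet (all MAC and IP addresses are made up for illustration).

ARP Request (Ethernet destination FF:FF:FF:FF:FF:FF - broadcast):
  Sender MAC: aa:aa:aa:aa:aa:aa    Sender IP: 192.168.10.10
  Target MAC: 00:00:00:00:00:00    Target IP: 192.168.10.20

ARP Reply (Ethernet destination aa:aa:aa:aa:aa:aa - unicast):
  Sender MAC: bb:bb:bb:bb:bb:bb    Sender IP: 192.168.10.20
  Target MAC: aa:aa:aa:aa:aa:aa    Target IP: 192.168.10.10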

An EVPN Fabric is a routed network and therefore requires a solution for forwarding Layer 2 broadcast messages. We can either use the BGP EVPN-based Ingress-Replication (IR) solution or enable multicast routing in the underlay network. This chapter introduces the latter model. As in the previous Unicast Routing section, we follow the multicast deployment workflow of the Nexus Dashboard Fabric Controller (NDFC) graphical user interface.
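
For comparison, the snippet below is a minimal sketch of the Ingress-Replication alternative on an NX-OS Leaf switch: BUM traffic for a Layer 2 VNI is replicated as unicast towards each remote VTEP learned via BGP EVPN, so no underlay multicast tree is needed (the NVE source interface and VNI number are illustrative).

interface nve1
  host-reachability protocol bgp
  source-interface loopback10
  member vni 10000
    ingress-replication protocol bgp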

Figure 2-4 depicts the components needed to deploy the multicast service in the underlay network. The default option for “RP mode” is ASM (Any-Source Multicast). ASM is a multicast service model in which receivers join a multicast group by sending PIM Join messages towards the group-specific Rendezvous Point (RP). The RP is a “meeting point”: multicast sources send their traffic to the RP, and the RP forwards it down the shared multicast tree towards the receivers. This process builds a shared tree rooted at the RP. The multicast-enabled routers, in turn, use the Protocol Independent Multicast – Sparse Mode (PIM-SM) routing protocol to forward multicast traffic from senders to receivers. In its default operation mode, PIM-SM allows receivers to switch from the shared tree to a source-specific tree. The other RP mode option is Bidirectional PIM (BiDir), a variant of PIM-SM in which multicast traffic always flows from the sender to the RP and from the RP down to the receivers over the shared tree. In an EVPN Fabric, Leaf switches are both multicast senders (they forward ARP messages generated by their local Tenant Systems) and receivers (they want to receive ARP messages generated by Tenant Systems connected to remote Leaf switches).
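
On NX-OS, the practical difference between the two RP modes shows up in the static RP definition. A minimal sketch, using the RP address and group range deployed later in this chapter:

! ASM: shared tree rooted at the RP; receivers may switch to the source-specific tree
ip pim rp-address 192.168.254.1 group-list 239.1.1.0/24
! BiDir: traffic stays on the bidirectional shared tree rooted at the RP
ip pim rp-address 192.168.254.1 group-list 239.1.1.0/24 bidir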

In our example, we define the multicast group range 239.1.1.0/24 using Any-Source Multicast (ASM) and configure both Spine switches as Rendezvous Points. We publish the shared Anycast-RP address 192.168.254.1 (Loopback 251) to the Leaf switches. Finally, we enable Protocol Independent Multicast (PIM) sparse mode on all Inter-Switch links and Loopback interfaces.


Figure 2-4: EVPN Fabric Protocols and Resources - Broadcast and Unknown Unicast.


Figure 2-5 illustrates the multicast configuration of our example EVPN Fabric's underlay network. In this setup, the Spine switches serve as Rendezvous Points (RPs) for the multicast group range 239.1.1.0/24. Spine-11 and Spine-12 publish the configured RP IP address 192.168.254.1/32 (Loopback 251) to the Leaf switches. These Spine switches belong to the same Anycast-RP set and identify themselves using their Loopback 0 interface addresses. On the Leaf switches, we define that the multicast group range 239.1.1.0/24 uses the Rendezvous Point 192.168.254.1.

Leaf switches act as both senders and receivers of multicast traffic. They indicate their willingness to receive traffic for the multicast group range 239.1.1.0/24 by sending PIM Join messages hop by hop towards the RP, using the destination IP address 224.0.0.13 (All PIM Routers). In the message, they specify the group they want to join. As multicast sources, Leaf switches register themselves with the Rendezvous Point using PIM Register messages, which they unicast to the configured group-specific Rendezvous Point IP address.
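
What eventually makes a Leaf switch a sender and receiver for a given delivery group is the Layer 2 VNI-to-multicast-group mapping under its NVE interface. The snippet below is a forward-looking sketch only; the VNI number, delivery group, and NVE source interface are illustrative, and the actual configuration is covered when we implement the first EVPN segment.

interface nve1
  host-reachability protocol bgp
  source-interface loopback10
  member vni 10000
    mcast-group 239.1.1.1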



Figure 2-5: EVPN Fabric Underlay Network Multicast Replication.


Configuration


Example 2-6 demonstrates the multicast configuration of the Spine switches. We enable the PIM protocol with the command feature pim. Then, we configure Loopback interface 251, define its IP address as 192.168.254.1/32, and add the loopback to the Unicast (OSPF) and Multicast (PIM-SM) routing processes. In addition, we enable PIM sparse mode on Loopback 0 and the Inter-Switch interfaces. After the interface configuration, we bind the RP address 192.168.254.1 to the multicast group list 239.1.1.0/24 and create an Anycast-RP set listing the Spine switches that share the RP address 192.168.254.1. Note that the switches belonging to the same Anycast-RP set synchronize the multicast sources registered with them. A switch accepts this synchronization information only from devices defined as members of the Anycast-RP set.

feature pim
!
interface loopback251
  description Anycast-RP-Shared
  ip address 192.168.254.1/32
  ip router ospf UNDERLAY-NET area 0.0.0.0
  ip pim sparse-mode
!
interface loopback 0
  ip pim sparse-mode
!
interface ethernet 1/1-4
  ip pim sparse-mode
!
ip pim rp-address 192.168.254.1 group-list 239.1.1.0/24
ip pim anycast-rp 192.168.254.1 192.168.0.11
ip pim anycast-rp 192.168.254.1 192.168.0.12

Example 2-6: Multicast Configuration - Spine-11 and Spine-12.


On the Leaf switches, we first enable the PIM feature. Then, we enable PIM sparse mode on Loopback interfaces 0, 10, and 20, as well as on the Inter-Switch interfaces. We also specify the IP address of the multicast group-specific Rendezvous Point.


feature pim
!
ip pim rp-address 192.168.254.1 group-list 239.1.1.0/24
!
interface loopback 0
  ip pim sparse-mode
!
interface loopback 10
  ip pim sparse-mode
!
interface loopback 20
  ip pim sparse-mode
!
interface ethernet 1/1-2
  ip pim sparse-mode

Example 2-7: Multicast Configuration – Leaf-101 - 104.

Example 2-8 shows that both Spine switches belong to the Anycast-RP 192.168.254.1 set. The RP-Set identifier IP address of Spine-11 is marked with an asterisk (*). The command output also verifies that we have associated the Rendezvous Point with the multicast group range 239.1.1.0/24. Example 2-9 verifies the RP-to-Multicast Group information from the Spine-12 perspective, and Example 2-10 from the Leaf-101 perspective.

Spine-11# show ip pim rp vrf default
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None

Anycast-RP 192.168.254.1 members:
  192.168.0.11*  192.168.0.12

RP: 192.168.254.1*, (0),
 uptime: 00:06:24   priority: 255,
 RP-source: (local),
 group ranges:
 239.1.1.0/24
 
Example 2-8: RP-to-Multicast Group Mapping – Spine-11.


Spine-12# show ip pim rp vrf default
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None

Anycast-RP 192.168.254.1 members:
  192.168.0.11  192.168.0.12*

RP: 192.168.254.1*, (0),
 uptime: 00:05:51   priority: 255,
 RP-source: (local),
 group ranges:
 239.1.1.0/24

Example 2-9: RP-to-Multicast Group Mapping – Spine-12.


Leaf-101# show ip pim rp vrf default
PIM RP Status Information for VRF "default"
BSR disabled
Auto-RP disabled
BSR RP Candidate policy: None
BSR RP policy: None
Auto-RP Announce policy: None
Auto-RP Discovery policy: None

RP: 192.168.254.1, (0),
 uptime: 00:05:18   priority: 255,
 RP-source: (local),
 group ranges:
 239.1.1.0/24

Example 2-10: RP-to-Multicast Group Mapping – Leaf-101.

Example 2-11 confirms that we have enabled PIM-SM on all the necessary interfaces. Additionally, the output verifies that Spine-11 has established four PIM adjacencies over the Inter-Switch links Ethernet1/1-4. Example 2-12 presents the same information from the viewpoint of Leaf-101.

Spine-11# show ip pim interface brief
PIM Interface Status for VRF "default"
Interface            IP Address      PIM DR Address  Neighbor  Border
                                                     Count     Interface
Ethernet1/1          192.168.0.11    192.168.0.101   1         no
Ethernet1/2          192.168.0.11    192.168.0.102   1         no
Ethernet1/3          192.168.0.11    192.168.0.103   1         no
Ethernet1/4          192.168.0.11    192.168.0.104   1         no
loopback0            192.168.0.11    192.168.0.11    0         no
loopback251          192.168.254.1   192.168.254.1   0         no

Example 2-11: Verification of PIM Interfaces – Spine-11.


Leaf-101# show ip pim interface brief
PIM Interface Status for VRF "default"
Interface            IP Address      PIM DR Address  Neighbor  Border
                                                     Count     Interface
Ethernet1/1          192.168.0.101   192.168.0.101   1         no
Ethernet1/2          192.168.0.101   192.168.0.101   1         no
loopback0            192.168.0.101   192.168.0.101   0         no

Example 2-12: Verification of PIM Interfaces – Leaf-101.

Example 2-13 provides more detailed information about the PIM neighbors of Spine-11.

Spine-11# show ip pim neighbor vrf default
PIM Neighbor Status for VRF "default"
Neighbor        Interface    Uptime    Expires   DR       Bidir-  BFD    ECMP Redirect
                                                 Priority Capable State     Capable
192.168.0.101   Ethernet1/1  00:11:29  00:01:41  1        yes     n/a     no
192.168.0.102   Ethernet1/2  00:10:39  00:01:35  1        yes     n/a     no
192.168.0.103   Ethernet1/3  00:10:16  00:01:29  1        yes     n/a     no
192.168.0.104   Ethernet1/4  00:09:58  00:01:18  1        yes     n/a     no
Example 2-13: Spine-11’s PIM Neighbors.

The "Mode" column in Example 2-14 is the initial evidence that we have deployed the Any-Source Multicast service.

Spine-11# show ip pim group-range
PIM Group-Range Configuration for VRF "default"
Group-range        Action Mode  RP-address      Shared-tree-range Origin
232.0.0.0/8        Accept SSM   -               -                 Local
239.1.1.0/24       -      ASM   192.168.254.1   -                 Static
Example 2-14: PIM Group Ranges.

The following three examples show that the multicast group range 239.1.1.0/24 is not active yet. We will return to this after the EVPN Fabric is deployed and we have implemented our first EVPN segment.

Spine-11# show ip mroute
IP Multicast Routing Table for VRF "default"

(*, 232.0.0.0/8), uptime: 00:08:38, pim ip
  Incoming interface: Null, RPF nbr: 0.0.0.0
  Outgoing interface list: (count: 0)

Example 2-15: Multicast Routing Information Base (MRIB) – Spine-11.

Spine-12# show ip mroute
IP Multicast Routing Table for VRF "default"

(*, 232.0.0.0/8), uptime: 00:07:33, pim ip
  Incoming interface: Null, RPF nbr: 0.0.0.0
  Outgoing interface list: (count: 0)

Example 2-16: Multicast Routing Information Base (MRIB) – Spine-12.

Leaf-101# show ip mroute
IP Multicast Routing Table for VRF "default"

(*, 232.0.0.0/8), uptime: 00:06:29, pim ip
  Incoming interface: Null, RPF nbr: 0.0.0.0
  Outgoing interface list: (count: 0)

Example 2-17: Multicast Routing Information Base (MRIB) – Leaf-101.

Next, we configure the Border Gateway Protocol (BGP) as the control plane protocol for the EVPN Fabric's Overlay Network.  
