Tuesday 15 December 2020

Object-based Approach to Cisco ACI:

 A Guide to Understand the Logic Behind Application Centric Infrastructure 

This book will soon be available


Sunday 25 October 2020

ACI Fabric Access Policies Part 4: Leaf Interface Profile, Leaf Switch Policy Group, and Leaf Switch Profile


Leaf Interface Profile

 

This section explains how to create an Interface Profile object, whose basic purpose is to group a set of physical interfaces under it. Phase 6 in Figure 1-39 illustrates the APIC Management Information Model (MIM) from the Interface Profile perspective. We are adding an object L101_L102_IPR under the class infraAccPortP (Leaf Interface Profile). The name of the object includes the identifiers of the Leaf switches (Leaf-101 and Leaf-102) on which I am going to use this Interface Profile. This object has a child object Eth1_1-5 (class infraHPortS) that defines the interface block and that has a relationship with the object Port_Std_ESXi-Host_IPG. By doing this we state that Ethernet interfaces 1/1-5 are LLDP-enabled 10 Gbps ports that can use VLAN Identifiers 300-399. Note that in this phase we have not yet specified on which switches we are using this Interface Profile.

 The RN rules used with related objects:

 Objects created under the class infraAccPortP (Leaf Interface Profile): Prefix1-{name}, where Prefix1 is “accportprof”. This gives us the RN “accportprof-L101_L102_IPR”.

 Objects created under the class infraHPortS (Access Port Selector): Prefix1-{name}-Prefix2-{type}, where Prefix1 is “hports” and Prefix2 is “typ”. This gives us the RN “hports-Eth1_1-5-typ-range”.

Objects created under the class infraPortBlk (Access Port Block): Prefix1-{name}, where Prefix1 is “portblk” and the name is an autogenerated property. This gives us the RN “portblk-Block2”.



Figure 1-39: APIC MIM Reference: Interface Profile.
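The RN rules above can be sketched in Python. The prefix strings come from the text; the helper function names are mine for illustration only, not part of any Cisco SDK:

```python
# Sketch of the RN/DN composition rules described above.
# Prefixes ("accportprof", "hports"/"typ", "portblk") are from the text;
# the helper names are illustrative, not APIC API calls.

def leaf_interface_profile_rn(name: str) -> str:
    # Class infraAccPortP: Prefix1-{name}, Prefix1 = "accportprof"
    return f"accportprof-{name}"

def access_port_selector_rn(name: str, sel_type: str = "range") -> str:
    # Class infraHPortS: Prefix1-{name}-Prefix2-{type}
    return f"hports-{name}-typ-{sel_type}"

def access_port_block_rn(name: str) -> str:
    # Class infraPortBlk: Prefix1-{name}, where name is autogenerated by APIC
    return f"portblk-{name}"

# A DN is the chain of RNs from the MIT root down to the object:
dn = "/".join(["uni", "infra",
               leaf_interface_profile_rn("L101_L102_IPR"),
               access_port_selector_rn("Eth1_1-5"),
               access_port_block_rn("Block2")])
print(dn)
# uni/infra/accportprof-L101_L102_IPR/hports-Eth1_1-5-typ-range/portblk-Block2
```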

ACI Fabric Access Policies Part 3: AAEP, Interface Policy and Interface Policy Group

 

Attachable Access Entity Profile - AAEP


This section explains how to create an Attachable Access Entity Profile (AAEP) object, which is used for attaching a Domain to an Interface Policy Group. Phase 3 in Figure 1-20 illustrates the APIC Management Information Model (MIM) from the AAEP perspective. Class infraAttEntityP is a child class of infra, and both belong to the package infra. I have already added the object attentp-AEP_PHY into the figure. The format of the RN for this object is Prefix1-{name}, where Prefix1 is attentp. This gives us the RN attentp-AEP_PHY.



Figure 1-20: APIC MIM Reference: Attachable Access Entity Profile.
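Based on the RN rule above, the corresponding REST payload can be sketched as follows. The class-name/"attributes" JSON shape is the usual APIC convention, but treat the exact attribute set as an assumption to be verified against your APIC version:

```python
import json

# Hypothetical payload for the AAEP object. The RN rule attentp-{name}
# comes from the text; the JSON structure follows the common APIC
# class/attributes convention.
aaep_name = "AEP_PHY"  # example name from the text
payload = {
    "infraAttEntityP": {
        "attributes": {
            "dn": f"uni/infra/attentp-{aaep_name}",
            "name": aaep_name,
        }
    }
}
print(json.dumps(payload))
```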

Thursday 22 October 2020

ACI Fabric Access Policies Part 2: Physical Domain

 Physical Domain

This section explains how to create a Physical Domain (Fabric Access Policy). It starts by mapping the REST call POST method and JSON Payload into Fabric Access Policy modeling. Then it explains how the same configurations can be done by using the APIC GUI. Phase 2 in Figure 1-15 illustrates the APIC Management Information Model (MIM) from the Physical Domain perspective. I have already added the object Phys-Standalone_ESXi_PHY into the figure. The format of the RN for this object is Prefix1-{name}, where the Prefix1 is “phys”. This gives us the RN “phys-Standalone_ESXi_PHY”.



Figure 1-15: Fabric Access Policy Modeling: Physical Domain.
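Since this section maps a REST POST and JSON payload into the model, here is a minimal sketch of that payload. The RN rule phys-{name} comes from the text; the URL path and payload shape follow common APIC usage, but verify both against your APIC version before relying on them:

```python
import json

def build_physdom_payload(name: str) -> dict:
    # RN rule from the text: phys-{name}
    return {"physDomP": {"attributes": {"dn": f"uni/phys-{name}",
                                        "name": name}}}

payload = build_physdom_payload("Standalone_ESXi_PHY")
print(json.dumps(payload))

# The actual call would be something like (untested sketch; assumes an
# authenticated session cookie):
#   import requests
#   requests.post("https://<apic>/api/mo/uni.json",
#                 json=payload, cookies=auth_cookies, verify=False)
```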


Wednesday 21 October 2020

ACI Fabric Access Policies Part 1: VLAN Pool

 

Introduction

 

Everything in ACI is managed as an Object. Each object belongs to a certain Class. As an example, when we create a VLAN Pool, we create an object that belongs to Class VlanInstP. Classes, in turn, are organized in Packages: Class VlanInstP belongs to Package fvns (fv = fabric virtualization, ns = namespace). Figure 1-1 illustrates the classes that we are using in this chapter when we create Fabric Access Policies. Lines with an arrow represent a Parent-Child structure and dotted lines represent a relationship (Rs) between classes. We will get back to Rs in upcoming sections.



Figure 1-1: ACI Fabric Access Policies.
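As a small sketch of the class/RN idea, a VLAN Pool object can be named as follows. The RN format "vlanns-[{name}]-{allocMode}" is my assumption based on common APIC naming, so verify it against the APIC MIM reference; the pool name is an invented example:

```python
# Illustrative sketch only: the RN format for class fvnsVlanInstP
# (package fvns) is assumed here, not taken from the text.

def vlan_pool_rn(name: str, alloc_mode: str = "static") -> str:
    # Assumed RN format: vlanns-[{name}]-{allocMode}
    return f"vlanns-[{name}]-{alloc_mode}"

# The VLAN Pool lives under uni/infra in the Management Information Tree:
print("uni/infra/" + vlan_pool_rn("VP300-399"))
# uni/infra/vlanns-[VP300-399]-static
```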

Saturday 5 September 2020

VXLAN Fabric with BGP EVPN Control-Plane: Design Considerations - Book Description and ToC





About this book

 

The intent of this book is to explain various design models for the Overlay and Underlay Networks used in a VXLAN Fabric with BGP EVPN Control-Plane. The first two chapters focus on the Underlay Network solution. OSPF is introduced first. Among other things, the book explains how OSPF flooding can be minimized with area design. After OSPF, there is a chapter about BGP in the Underlay Network. Both OSPF and BGP are covered in depth, and topics such as convergence are discussed. After the Underlay Network part, the book focuses on BGP design. It explains the following models: (a) BGP Multi-AS with OSPF Underlay, a chapter that discusses two design models – Shared Spine ASN and Unique Spine ASN, (b) BGP-Only Multi-ASN, where both direct and loopback overlay BGP peering models are explained, (c) Single-ASN with OSPF Underlay, (d) Hybrid-ASN with OSPF Underlay – Pod-specific shared ASNs connected via a Super-Spine layer using eBGP peering, and (e) Dual-ASN model, where leafs share one ASN and spines share another. Each of the design model chapters includes a “Complexity Map” that should help readers understand the complexity of each solution. This book also explains BGP ECMP and, related to ECMP, it also covers ESI Multihoming. The last chapter introduces how two Pods, which can also be geographically dispersed DCs, can be connected using a Layer 3-only DCI with MPLS.

 

 I am using a 5-stage Clos topology throughout the book. Some solutions, though, are explained by using only three switches for the sake of simplicity. I am also using an IP-only Underlay Network with Ingress-Replication, so this book does not cover the Underlay Network Multicast solution. In addition, I am not covering DCI using a Layer 2 Border Gateway (BGW) or the Overlay Tenant Routing Multicast solution in this book, because those, along with the Underlay Multicast solutions, are covered in my first book “Virtual Extensible LAN – VXLAN: A Practical Guide to VXLAN solution”, which is available at Amazon and Leanpub.

 

I wanted to keep the focus of the book fairly narrow and concentrate on Control-Plane design and functionality. Please be aware that this book does not give any recommendation as to which solution is the best and which is not. It is the readers' responsibility to find that out and select the best solution for their needs. The book includes 66 full-color images, 260 configuration/show command examples, and 32 packet captures.


Table of Contents viii

Chapter 1:  Underlay Network with OSPF 1

Introduction 1

Infrastructure AS Numbering and IP Addressing Scheme 1

OSPF Neighbor Process 2

OSPF Neighbor Process: Init 3

OSPF Neighbor Process: ExStart 7

OSPF Neighbor Process: Exchange and Full 9

Shortest-Path First (SPF)/Dijkstra Algorithm 18

SPF Run – Phase I: Building a Shortest-Path Tree 19

First iteration round 20

Second iteration round 21

Third iteration round 24

SPF Run – Phase II: Adding Leafs to Shortest-Path Tree 25

Convergence 26

Flood reduction with multiple OSPF Areas 30

OSPF summarization in ABR 40

Removing OSPF Router from the Datapath 43

LSA and SPF timers 47

LSA Throttling Timer 47

Flood Pacing Timer 49

LSA Group Pacing Timer 50

Summary 51

References 52


Chapter 2:  Underlay Network with BGP 53

Introduction 53

Infrastructure AS Numbering and IP Addressing Scheme 54

BGP Configuration 55

Leaf Switches 55

Spine Switches 56

Super-Spine Switches 56

BGP Neighbor Process 57

Idle 57

Connect 57

Active 57

Finalizing negotiation of the TCP connection 58

OpenSent and OpenConfirm 61

Established 61

BGP NLRI Update Process 65

RIB to Adj-RIB-Out (Pre-Policy) 65

Adj-RIB-Out (Pre) to Adj-RIB-Out (Post) 65

Adj-RIB-In (Post) to Adj-RIB-In (Pre) 66

Adj-RIB-In (Pre) to Loc-RIB 66

Loc-RIB to RIB 66

BGP Update: Unreachable Destination 70

MRAI Timer 71

BGP AS-Path Prepend 71

OSPF and BGP Comparison 75

References 78


Chapter 3:  BGP Multi-AS with OSPF Underlay 79

Introduction 79

Inter-Switch Link IP addressing 80

Underlay Network Routing with OSPF 81

Overlay Network BGP L2VPN EVPN Peering 83

Adding L2VN segment 86

Routing comparison: Spine Sharing ASN vs. Unique ASN 88

Spine Switches Sharing ASN 88

All Switches in Unique ASN 94

BGP convergence: Group of Spines in the same AS 101

BGP convergence: All switches in unique AS 106

Complexity Chart of Multi-ASN Design with OSPF Underlay 113

Spines in shared ASN – OSPF Underlay 113

All switches in unique ASN - OSPF Underlay 114

References 115


Chapter 4:  BGP Only Multi-ASN Design 117

Introduction 117

Underlay: Direct Peering – Overlay: Loopback 117

Underlay: Direct Peering – Overlay: Direct Peering 125

Complexity Chart Multi-ASN Design with eBGP Underlay 132

Direct Underlay Peering – Loopback Overlay Peering 132

Direct Underlay Peering – Direct Overlay Peering 133


Chapter 5:  Single AS Model with OSPF Underlay 135

Introduction 135

Configuration 136

BGP Policy and BGP Update Configuration 136

Leaf Switches 136

Spine Switches 137

Super-Spine Switches 138

Verification 140

BGP L2VPN EVPN Peering 140

BGP Table Verification 140

Inconsistency Problem with Received Route Count 142

Fixing the Problem 148

Re-checking of BGP Tables 151

NVE Peering 154

MAC Address Table and L2RIB 156

Data-Plane Testing 158

Complexity Chart 159

Single-AS Design with OSPF Underlay 159

Chapter 6:  Hybrid AS Model with OSPF Underlay 161

Introduction 161

Configuration 162

Leaf – BGP Policy and BGP Update settings 162

Spine - BGP Adjacency and BGP Update settings 163

SuperSpine - BGP Adjacency and BGP Update settings 166

Verification 170

Complexity Chart of Hybrid-ASN Design 186

Direct Underlay Peering – Loopback Overlay Peering 186


Chapter 7:  Dual-AS Model with OSPF Underlay 188

Introduction 188

Configuration 189

BGP Adjacency Policy 189

BGP Update Message Modification 189

BGP Loop Prevention Adjustment 190

Verification 192

BGP peering 192

BGP table 192

L2RIB 195

MAC Address Table 196

Complexity Chart of Hybrid-ASN Design with OSPF Underlay 197


Chapter 8:  ESI Multi-Homing 198

Introduction 198

ESI Multihoming Configuration 199

Designated Forwarder for L2BUM 201

Mass-Withdraw 205

Load-Balancing 213

References 216

Chapter 9:  ECMP process 217

ECMP process 217


Chapter 10: L3-Only Inter-Pod Connection 227

Introduction 227

MPLS Core Underlay Routing with IS-IS 228

IS-IS Configuration 229

IS-IS Verification 229

MPLS Label Distribution with LDP 231

MPLS LDP Configuration 233

MPLS Verification 233

MPLS Control-Plane Operation - LDP 235

MPLS Data-Plane Operation – Label Switching 236

BGP VPNv4 Peering 238

BGP VPNv4 Configuration 238

BGP VPNv4 Peering Verification 239

BGP L2VPN EVPN Peering 240

BGP L2VPN EVPN Configuration 240

BGP L2VPN EVPN Peering Verification 241

Adding Tenant to Border Leafs 242

Tenant Configuration 242

Verification 244

Control-Plane: End-to-End Route Propagation 244

Data-Plane: Label Switching Path 249

Data-Plane: ICMP Request 251


Appendix A: Chapter 10 device configurations 253


Wednesday 15 July 2020

BGP EVPN Underlay Network with BGP (Multi-AS)


Introduction


The focus of this chapter is to explain the BGP Multi-AS Underlay Network design in a BGP EVPN/VXLAN Fabric. It starts by explaining the BGP configuration, because this way the explanation can be done by using show and debug commands as well as packet captures. The next section discusses the BGP adjacency process and its related states (Idle, Connect/Active, OpenSent, OpenConfirm, and Established). After that, this chapter explains BGP routing, discussing how connected routes are sent from the RIB to the Loc-RIB and from there to the Adj-RIB-Out (Pre/Post). This section also introduces how NLRIs received within a BGP Update eventually end up in the RIB of the receiving BGP speaker. In addition, this chapter briefly introduces the MRAI timer as well as a non-disruptive device maintenance solution. The last section tries to answer which protocol best fits the Underlay Network of a BGP EVPN fabric.



Infrastructure AS Numbering and IP Addressing Scheme


The AS numbering scheme used in this chapter is the same as in Chapter 1, but instead of using unnumbered interfaces, each inter-switch interface now has an IP address assigned to it. It is also possible to use unnumbered interfaces with BGP by using IPv6 Link-Local addressing [RFC 5549]. However, this solution is not supported by all vendors.


Figure 2-1: IP addressing Scheme.
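To illustrate the numbered design, a leaf's direct eBGP underlay peering could be sketched roughly as below in NX-OS style. The ASNs, interface numbers, and addresses are my examples, not values from the book's figures:

```
! Illustrative leaf sketch: per-link /31 addressing with direct eBGP
! peering to the spine (all values are examples)
interface Ethernet1/1
  no switchport
  ip address 10.1.11.1/31              ! numbered inter-switch link
router bgp 65001
  router-id 192.168.0.101
  address-family ipv4 unicast
    network 192.168.100.101/32         ! advertise the VTEP loopback
  neighbor 10.1.11.0 remote-as 65011   ! spine end of the /31 link
    address-family ipv4 unicast
```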

Tuesday 7 July 2020

BGP EVPN Underlay Network with OSPF

Introduction


The foundation of a modern Datacenter fabric is the Underlay Network, and it is crucial to understand the operation of the Control-Plane protocol used in it. The focus of this chapter is OSPF. The first section introduces the network topology and AS numbering scheme used throughout this book. The second section explains how OSPF speakers connected to the same segment become fully adjacent. The third section discusses how OSPF speakers exchange Link State information and build a Link-State Database (LSDB), which is used as the information source for calculating the Shortest Path Tree (SPT) towards each destination using the Dijkstra algorithm. The focus of the fourth section is the OSPF LSA flooding process. It starts by explaining how a local OSPF speaker sends Link State Advertisements wrapped inside a Link-State Update message to its adjacent router, and how receiving OSPF speakers a) install the information into the LSDB, b) acknowledge the packet, and c) flood it out of their OSPF interfaces. The fifth section discusses LSA and SPF timers. At the end of this chapter, there are OSPF-related configurations from every device.

Infrastructure AS Numbering and IP Addressing Scheme


Figure 1-1 illustrates the AS numbering and IP addressing scheme used throughout this book. All Leaf switches have a dedicated BGP Private AS number, while spine switches in the same cluster share the same AS number. Inter-switch links use unnumbered IP addressing (borrowing the address of interface Loopback 0), which is also used as the OSPF Router-ID. Loopback 0 is not advertised by any device. The OSPF network type for inter-switch links is point-to-point, so there is no DR/BDR election process. Leaf switches also have interface Loopback 30, which is used as the VTEP (VXLAN Tunnel End Point) address. Loopback 30 IP addresses are advertised by the Leaf switches. All Loopback interfaces are in OSPF passive-interface mode. At this stage, all switches belong to OSPF Area 0.0.0.0.


Figure 1-1: AS Numbering and IP Addressing Scheme.
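As a sketch, the scheme above could look roughly like the following NX-OS-style leaf configuration. The process name, Router-ID, and interface numbers are my examples, not values from the book's figures:

```
! Illustrative leaf sketch (all names and addresses are examples)
feature ospf
router ospf UNDERLAY-NET
  router-id 192.168.0.101
interface loopback0
  ip address 192.168.0.101/32          ! borrowed by unnumbered links; not advertised
interface loopback30
  ip address 192.168.100.101/32        ! VTEP address; advertised, passive
  ip router ospf UNDERLAY-NET area 0.0.0.0
  ip ospf passive-interface
interface Ethernet1/1
  medium p2p
  ip unnumbered loopback0
  ip router ospf UNDERLAY-NET area 0.0.0.0
```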

Wednesday 25 March 2020

Comparing Internet Connection used in AWS and LISP Based Networks


Forewords

This post starts by discussing the Internet connection from the AWS VPC Control-Plane operation perspective. The public AWS documentation only describes the basic components, such as the Internet Gateway (IGW) and the subnet-specific Implicit Routers (IMR). However, the public AWS documentation does not describe the Control-Plane operation related to distributing the default route from IGWs to IMRs. The AWS VPC Control-Plane part of this post is based on my assumptions, so be critical of what you read. The second part of this post briefly explains the Control-Plane operation of the Internet connection used in a LISP-based network. By comparing AWS VPC to a LISP-based network, I just want to point out that even though some might think that cloud-based networking is much simpler than traditional on-premises networking, it is not. People tend to trust the network solutions used in clouds (AWS, Azure, etc.), and there is no debate about (a) what hardware is used, (b) how the redundancy works, (c) whether the solutions are standards-based, and so on. Now it is more like: I do not care how it works as long as it works. Good or bad, I do not know.

Thursday 12 March 2020

Intra-Subnet Communication: AWS VPC versus LISP Based Campus Fabric


Forewords


This article introduces the principles of Amazon Web Services Virtual Private Cloud (AWS VPC) Control-Plane operation and Data-Plane encapsulation. It also explains how the same kind of forwarding model can be achieved using standard protocols. Amazon has not published the details of its VPC networking solution, and this document relies on publicly available information and the author's studies. My motivation for writing this document was to point out that no matter how simple and easy to manage Cloud Networking looks and feels, such networks are still as complex as any other large-scale networks.

Example Environment


Figure 1-1 illustrates an example AWS VPC environment running an imaginary application on two Elastic Compute Cloud (EC2) Instances, EC2-A and EC2-B. The instance EC2-A will be launched on physical server Host-A, while the instance EC2-B will later be launched on physical server Host-B. The VPC vpc-1a2b3c4d is created in the Stockholm (eu-north-1) Region in Availability Zone (AZ) eu-north-1c. The subnet 172.16.31.0/20 can be used in AZ eu-north-1c. The subnet for instances is 172.31.10.0/24. Elastic Network Interface 1 (ENI1) with IP address 172.31.10.10 will be attached to the instance EC2-A, and ENI2 with IP address 172.31.10.20 will be attached to the instance EC2-B. For simplicity, the same Security Group (SG), “sg-nwktimes” (allowing all data traffic between EC2-A and EC2-B), is attached to both instances.

Inside both physical servers there is a software router: Router-1 in Host-A and Router-2 in Host-B. The servers use offload NICs for the connection to the AZ Underlay Network, and data traffic from instances is sent out of the server straight to the offload NIC, bypassing the hypervisor. The AZ Backbone includes three routers: Router-3, Router-4, and Router-5. There is also a Mapping Service that represents the centralized Control Plane. It holds an Instance-to-Location Mapping Database that has information about every EC2 Instance running in a given VPC. Routers, servers, and the Mapping Service use IPv6 addressing.

Figure 1-1: Overall example topology and IP addressing scheme.
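The Mapping Service idea described above can be sketched conceptually in Python. This is an illustration of the lookup only; AWS has not published its actual implementation, so the data structure and all values here are invented:

```python
# Conceptual sketch of an instance-to-location mapping lookup.
# All names and addresses are invented for illustration; this is not
# AWS's actual implementation.

mapping_db = {
    # instance ENI IP -> (physical host, host underlay IPv6 address)
    "172.31.10.10": ("Host-A", "2001:db8::a"),
    "172.31.10.20": ("Host-B", "2001:db8::b"),
}

def resolve(dst_ip: str):
    """Return (host, underlay address) for a destination ENI IP, or None."""
    return mapping_db.get(dst_ip)

# A software router in Host-A would resolve EC2-B's location before
# encapsulating traffic towards it:
print(resolve("172.31.10.20"))
```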

Monday 2 March 2020

Similarities Between AWS VPC and Cisco SDA – Intra-Subnet Communication


Update March 6, 2020: This post will soon be made obsolete by a new version


Forewords


This article explains the similarities between a LISP/VXLAN-based Campus Fabric and AWS Virtual Private Cloud (VPC) from the Intra-Subnet Control-Plane and Data-Plane operation perspective. The AWS VPC solution details are not publicly available, and the information included in this article is based on the author's own study using publicly available AWS VPC documentation.

There are two main reasons for writing this document: 

First, Cisco SDA is an on-prem LAN model, while AWS VPC is an off-prem DC solution. I wanted to point out that these two solutions, even though used for very different purposes, use the same kind of Control-Plane operation and Data-Plane encapsulation and are managed via a GUI. This is, in a way, my answer to the ongoing discussion about whether there are DC networks, Campus networks, and so on, or whether there are just networks.

Second, my own curiosity to understand the operation of AWS VPC.

I usually start by introducing the example environment, then explain the configuration, move on to the Control-Plane operation, and then to the Data-Plane operation. However, this time I take a different approach. This article first introduces the example environment, but then the Data-Plane operation is discussed before the Control-Plane operation. This way it is easier to understand what information is needed and how that information is gathered.

Thursday 30 January 2020

LISP Control-Plane in Campus Fabric: Table of Contents

This is the table of contents of my book "LISP Control-Plane in Campus Fabric". The book is available at https://leanpub.com/lispcontrol-planeincampusfabric
The book is now complete. It will soon be available on Amazon as well.


Sunday 12 January 2020

VXLAN Book Errata 12-January 2020


Edits made on 12 January 2020: These updates have been made to both the pdf book (available at Leanpub.com) and the paperback version (available at Amazon.com).