Monday, 29 December 2025

UET Congestion Management: Introduction

Introduction


Figure 6-1 depicts a simple scale-out backend network for an AI data center. The topology follows a modular design, allowing the network to scale out or scale in as needed. The smallest building block in this example is a segment, which consists of two nodes, two rail switches, and one spine switch. Each node in the segment is equipped with a dual-port UET NIC and two GPUs.

Within a segment, GPUs are connected to the leaf switches using a rail-based topology. For example, in Segment 1A, the communication path between GPU 0 on Node A1 and GPU 0 on Node A2 uses Rail A0 (Leaf 1A-1). Similarly, GPU 1 on both nodes is connected to Rail A1 (Leaf 1A-2). In this example, we assume that intra-node GPU collective communication takes place over an internal, high-bandwidth scale-up network (such as NVLink). As a result, intra-segment GPU traffic never reaches the spine layer. Communication between segments is carried over the spine layer.

The example network is a best-effort (that is, PFC is not enabled) two-tier, three-stage non-blocking fat-tree topology, where each leaf and spine switch has four 100-Gbps links. Leaf switches have two host-facing links and two inter-switch links, while spine switches have four inter-switch links. All inter-switch and host links are Layer-3 point-to-point interfaces, meaning that no Layer-2 VLANs are used in the example network.

Links between a node’s NIC and the leaf switches are Layer-3 point-to-point connections. The IP addressing scheme uses /31 subnets, where the first address is assigned to the host NIC and the second address to the leaf switch interface. These subnets are allocated in a contiguous manner so they can be advertised as a single BGP aggregate route toward the spine layer.
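As a concrete illustration of this addressing scheme, the short Python sketch below uses the standard ipaddress module to carve a hypothetical 10.1.0.0/28 aggregate into contiguous /31 point-to-point links. The aggregate prefix and the number of links are illustrative assumptions, not values taken from Figure 6-1.

import ipaddress

# Hypothetical per-leaf aggregate; the real addressing plan may differ.
leaf_aggregate = ipaddress.ip_network("10.1.0.0/28")
print(f"BGP aggregate advertised toward the spine layer: {leaf_aggregate}")

# Carve the aggregate into contiguous /31 point-to-point host links.
for link_id, p2p in enumerate(leaf_aggregate.subnets(new_prefix=31)):
    nic_ip, leaf_ip = p2p[0], p2p[1]   # first address -> host NIC, second -> leaf interface
    print(f"host link {link_id}: subnet {p2p}  NIC {nic_ip}  leaf {leaf_ip}")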

The trade-off of this aggregation model is that host-link or NIC failures cannot rely solely on BGP route withdrawal for fast failure detection. Additional local failure-detection mechanisms are therefore required at the leaf switch.

Although not shown in Figure 6-1, the example design supports a scalable multi-pod architecture. Multiple pods can be interconnected through a super-spine layer, enabling large-scale backend networks.

Note: The OSI label between the GPUs within a node indicates that both GPUs belong to the same Operating System Instance (OSI). The link between the GPUs, in turn, is part of a high-bandwidth domain (the scale-up backend network).

Figure 6-1: Example of an AI DC Backend Network Topology.

Congestion Types

In this text, we categorize congestion into two distinct domains: congestion within nodes, which includes incast, local, and outcast congestion, and congestion in scale-out backend networks, which includes link and network congestion. The following sections describe each congestion type in detail.


Incast Congestion

In high-performance networking, Incast is a specific type of congestion that occurs when a many-to-one communication pattern overwhelms a single network point. This is fundamentally a "fan-in" problem, where the traffic volume destined for a single receiver exceeds both the physical line rate of the last-hop switch's egress interface and the storage capacity of its output buffers.

To visualize this, consider the configuration in Figure 6-2. The setup consists of four UET Nodes (A1, A2, B1, and B2), each containing two GPUs. This results in eight total processing units, labeled Rank 0 through Rank 7. Each Rank is equipped with its own dedicated 100G NIC.

The bottleneck forms when multiple sources target a single destination simultaneously. In this scenario, Ranks 1 through 7 all begin transmitting data to Rank 0 at the exact same time, each at a 100G line rate.

The backbone of the network is typically robust enough to handle this aggregate traffic. When the inter-switch links are faster than the host links, for example 400G or 800G, the core of the network stays clear and fast. If the core were to experience congestion, Network-Signaled Congestion Control (NSCC) could be enabled to manage it. However, the specific problem here occurs at Leaf 1A-1, the switch where the target (Rank 0) is connected. While the switch receives a combined 600G of data destined for Rank 0, the outgoing interface from the switch to Rank 0 can only move 100G. Note that Rank 1 uses the high-speed NVLink scale-up path, not its Ethernet NIC, which is why only six 100G flows traverse the scale-out network.

A buffer overflow is inevitable when 600G of data arrives at an egress port that can only output 100G. The switch is forced to store the excess 500G of data per second in its internal memory (buffers). Because network buffers are quite small and high-speed data moves incredibly fast, these buffers fill up in microseconds.
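To put rough numbers on how quickly this happens, the calculation below estimates the buffer fill time for the scenario above. The 2 MB buffer share is an assumed round figure; actual switch buffer allocations are platform- and configuration-specific.

# Back-of-the-envelope incast arithmetic for the scenario above.
GBPS = 1e9

fan_in_rate = 6 * 100 * GBPS          # six remote ranks, each sending at 100G
drain_rate  = 1 * 100 * GBPS          # single 100G egress link toward Rank 0
excess_rate = fan_in_rate - drain_rate            # 500 Gb/s that must be buffered
buffer_bits = 2 * 1024 * 1024 * 8                 # assume ~2 MB of buffer for this port

time_to_fill = buffer_bits / excess_rate
print(f"excess rate: {excess_rate / GBPS:.0f} Gb/s")
print(f"time to fill a 2 MB buffer share: {time_to_fill * 1e6:.1f} microseconds")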

Once the buffers are full, the switch has no choice but to drop any new incoming packets. This leads to massive retransmission delays and "stuttering" in application performance. This is particularly devastating for AI training workloads, where all Ranks must stay synchronized to maintain efficiency.

While traditional networks use simple buffer management to deal with this, Ultra Ethernet utilizes a more sophisticated approach. To prevent "fan-in" from ever overwhelming the switch buffers in the first place, UET employs Receiver Credit-based Congestion Control (RCCC). This mechanism ensures the receiver remains in control by distributing credits that define exactly how much data each active source is allowed to transmit at any given time.
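The sketch below shows the receiver-credit idea in a deliberately simplified form: the receiver grants each active sender a share of the bytes its egress link can drain within one control window. The class name, the even credit split, and the 20-microsecond window are assumptions made for illustration; the actual UET credit encoding and update rules are defined by the specification and covered in the RCCC chapter.

# Simplified receiver-credit sketch; names and the credit policy are hypothetical.
class CreditGrantingReceiver:
    def __init__(self, drain_rate_bytes_per_us: float, window_us: float):
        # Grant no more than the egress link can drain in one control window.
        self.budget_bytes = drain_rate_bytes_per_us * window_us

    def grant_credits(self, senders: list[str]) -> dict[str, int]:
        # Split the per-window byte budget evenly across the active senders.
        share = int(self.budget_bytes / max(len(senders), 1))
        return {sender: share for sender in senders}

# 100 Gb/s is roughly 12,500 bytes per microsecond; 20 us control window (assumed).
receiver = CreditGrantingReceiver(drain_rate_bytes_per_us=12_500, window_us=20)
print(receiver.grant_credits([f"rank{i}" for i in range(2, 8)]))
# Each of the six senders may transmit about 41,666 bytes in this window.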


Figure 6-2: Intra-node Congestion - Incast Congestion.


Local Congestion

Local congestion arises when the High-Bandwidth Memory (HBM) controller, which manages access to the GPU’s memory channels, becomes a bottleneck. The HBM controller arbitrates all read and write requests to GPU memory, regardless of their source. These requests may originate from the GPU’s compute cores, from a peer GPU via NVLink, or from a network interface card (NIC) performing remote memory access (RMA) operations.

With a UET_WRITE operation, the target GPU compute cores are bypassed: the NIC writes data directly into GPU memory using DMA. The GPU does not participate in the data transfer itself, and the NIC handles packet reception and memory writes. Even in this case, however, the data must still pass through the HBM controller, which serves as the shared gateway to the GPU’s memory system.

In Figure 6-3, the HBM controller of Rank 0 receives seven concurrent memory access requests: six inter-node RMA write requests and one intra-node request. The controller must arbitrate among these requests, determining the order and timing of each access. If the aggregate demand exceeds the available memory bandwidth or arbitration capacity, some requests are delayed. These memory-access delays are referred to as local congestion.
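As a rough illustration, the calculation below compares the aggregate write demand arriving at Rank 0's HBM with the memory bandwidth available for those writes. The NVLink rate and the HBM write budget are assumed round numbers, not vendor specifications.

# Local congestion as a simple demand-versus-budget comparison.
GBPS = 1e9

rma_writes   = 6 * 100 * GBPS      # six inter-node UET_WRITE streams
nvlink_write = 1 * 400 * GBPS      # one intra-node peer-GPU stream (assumed rate)
demand       = rma_writes + nvlink_write

hbm_write_budget = 800 * GBPS      # assumed share of HBM bandwidth left for incoming writes

if demand > hbm_write_budget:
    slowdown = demand / hbm_write_budget
    print(f"demand {demand / GBPS:.0f} Gb/s exceeds budget {hbm_write_budget / GBPS:.0f} Gb/s")
    print(f"memory accesses are delayed by roughly {slowdown:.2f}x on average (local congestion)")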



Figure 6-3: Intra-node Congestion - Local Congestion.


Outcast Congestion

Outcast congestion is the third type of congestion observed in collective operations. It occurs when multiple packet streams share the same egress port, and some flows are temporarily delayed relative to others. Unlike incast congestion, which arises from simultaneous arrivals at a receiver, outcast happens when certain flows dominate the output resources, causing other flows to experience unfair delays or buffer pressure.

Consider the broadcast phase of the AllReduce operation. After Rank 0 has aggregated the gradients from all participating ranks, it sends the averaged results back to all other ranks. Suppose Rank 0 sends these updates simultaneously to ranks on the other nodes over the same egress queue of its NIC. If one destination flow slightly exceeds the others in packet rate, the remaining flows experience longer queuing delays or may even be dropped if the egress buffer becomes full. These delayed flows are “outcast” relative to the dominant flows.
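The toy model below illustrates why a dominant flow on a shared first-in, first-out egress queue squeezes the others: in a single FIFO, each flow drains roughly in proportion to its share of the arrivals, so once the aggregate exceeds the port rate, the smaller flows back up even though each of them alone would fit the link. The offered rates are hypothetical.

# Three flows share one 100G NIC egress queue; one flow offers more traffic.
GBPS = 1e9
egress_capacity = 100 * GBPS

offered = {"to_rank2": 60 * GBPS, "to_rank4": 30 * GBPS, "to_rank6": 30 * GBPS}
total_offered = sum(offered.values())

for flow, rate in offered.items():
    # In a shared FIFO, each flow drains in proportion to its arrival share.
    drained = egress_capacity * rate / total_offered
    print(f"{flow}: offered {rate / GBPS:.0f}G, drained {drained / GBPS:.0f}G, "
          f"backlog grows at {(rate - drained) / GBPS:.0f}G")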

In this scenario, the NIC at Rank 0 must perform multiple UET_WRITE operations in parallel, generating high egress traffic toward several remote FEPs. At the same time, the HBM controller on Rank 0 may become a bottleneck because the data must be read from memory to feed the NIC. Thus, local congestion can occur concurrently with outcast congestion, especially during large-scale AllReduce broadcasts where multiple high-bandwidth streams are active simultaneously.

Outcast congestion illustrates that even when the network’s total capacity is sufficient, uneven traffic patterns can cause some flows to be temporarily delayed or throttled. Outcast congestion is mitigated through appropriate egress scheduling and flow-control mechanisms that ensure fair access to shared resources and predictable collective-operation performance. These mechanisms are explained in the upcoming Network-Signaled Congestion Control (NSCC) and Receiver Credit-Based Congestion Control (RCCC) chapters.


Figure 6-4: Intra-node Congestion - Outcast Congestion.


Link Congestion


Traffic in distributed neural network training workloads is dominated by bursty, long-lived elephant flows. These flows are tightly coupled to the application’s compute–communication phases. During the forward pass, network traffic is minimal, whereas during the backward pass, each GPU transmits large gradient updates at or near line rate. Because weight updates can only be computed after gradient synchronization across all workers has completed, even a single congested link can delay the entire training step.

In a routed, best-effort fat-tree Clos fabric, link congestion may be caused by Equal-Cost Multi-Path (ECMP) collisions. ECMP typically uses a five-tuple hash—comprising the source and destination IP addresses, transport protocol, and source and destination ports—to select an outgoing path for each flow. During the backward pass, a single rank often synchronizes multiple gradient chunks with several remote ranks simultaneously, forming a point-to-multipoint traffic pattern.

For example, suppose Ranks 0–3 in Segment 1A initiate gradient synchronization with Ranks 4–7 in Segment 1B at the same time. Ranks 0 and 2 are connected to rail 0 through Leaf 1A-1, while Ranks 1 and 3 are connected to rail 1 through Leaf 1A-2. As shown in Figure 6-5, the ECMP hash on Leaf 1A-1 selects the same uplink toward Spine 1A for both flows arriving via rail 0, while the ECMP hash on Leaf 1A-2 distributes its flows evenly across the available spine links.

As a result, two 100-Gbps flows are mapped onto a single 100-Gbps uplink on Leaf 1A-1. The combined traffic exceeds the egress link capacity, causing buffer buildup and eventual buffer overflow on the uplink toward Spine 1A. This condition constitutes link congestion, even though alternative equal-cost paths exist in the topology.
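The sketch below shows the mechanics of five-tuple ECMP hashing. The addresses, ports, and CRC32-based hash are illustrative, since real switches use vendor-specific hash functions and seeds, but the effect is the same: with only two uplinks and low header entropy, it is common for two flows to select the same uplink.

import zlib

# The two spine-facing uplinks of a leaf switch (illustrative names).
uplinks = ["uplink-1", "uplink-2"]

def ecmp_pick(src_ip, dst_ip, proto, src_port, dst_port):
    # Hash the five-tuple and map the flow onto one of the equal-cost uplinks.
    key = f"{src_ip},{dst_ip},{proto},{src_port},{dst_port}".encode()
    return uplinks[zlib.crc32(key) % len(uplinks)]

# Two flows entering Leaf 1A-1 via rail 0 (illustrative addresses and ports).
flows = [
    ("10.1.0.0", "10.1.1.0", "UDP", 50000, 60000),
    ("10.1.0.2", "10.1.1.4", "UDP", 50001, 60000),
]
for flow in flows:
    print(flow, "->", ecmp_pick(*flow))
# With two uplinks, roughly half of all flow pairs hash onto the same uplink.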

In large-scale AI fabrics, thousands of concurrent flows may be present, and low entropy in traffic patterns—such as many flows sharing similar IP address ranges and port numbers—further increases the likelihood of ECMP collisions. Consequently, link utilization may become uneven, leading to transient congestion and performance degradation even in a nominally non-blocking network.

Ultra Ethernet Transport includes signaling mechanisms that allow endpoints to react to persistent link congestion, including influencing path selection in ECMP-based fabrics. These mechanisms are discussed in later chapters.

Note: Although outcast congestion is fundamentally caused by the same condition—attempting to transmit more data than an egress interface can sustain—Ultra Ethernet Transport distinguishes between host-based and switch-based egress congestion events and applies different signaling and control mechanisms to each. These mechanisms are described in the following congestion control chapters.



Figure 6-5: Link Congestion.

Network Congestion


Common causes of network congestion include an excessively high oversubscription ratio, ECMP collisions, and link or device failures. A less obvious but important source of short-term congestion is Priority Flow Control (PFC), which is commonly used to build lossless Ethernet networks. PFC, together with Explicit Congestion Notification (ECN), forms the foundation of lossless Ethernet for RoCEv2 but should be avoided in a UET-enabled best-effort network. The upcoming chapters explain why.

PFC relies on two buffer thresholds to control traffic flow: xOFF and xON. The xOFF threshold defines the point at which a switch generates a pause frame when a priority queue becomes congested. A pause frame is an Ethernet MAC control frame that tells the upstream device which Traffic Class (TC) queue is congested and for how long packet transmission for that TC should be paused. Packets belonging to other traffic classes can still be forwarded normally. Once the buffer occupancy drops below the xON threshold, the switch sends a resume signal, allowing traffic for that priority queue to continue before the actual pause timer expires.
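The following sketch models the xOFF/xON behavior for a single priority queue. The threshold values and the arrival/drain pattern are illustrative and not taken from any particular switch configuration.

# Simplified xOFF/xON model for one priority (Traffic Class) queue.
class PfcQueue:
    def __init__(self, xoff_kb=600, xon_kb=400):
        self.xoff_kb, self.xon_kb = xoff_kb, xon_kb
        self.depth_kb = 0
        self.paused = False

    def update(self, arrived_kb, drained_kb):
        self.depth_kb = max(0, self.depth_kb + arrived_kb - drained_kb)
        if not self.paused and self.depth_kb >= self.xoff_kb:
            self.paused = True
            print(f"depth {self.depth_kb} KB >= xOFF: send PFC pause upstream for this TC")
        elif self.paused and self.depth_kb <= self.xon_kb:
            self.paused = False
            print(f"depth {self.depth_kb} KB <= xON: send resume (pause frame with quanta 0)")

queue = PfcQueue()
for arrived_kb, drained_kb in [(400, 100), (400, 100), (0, 100), (0, 300)]:
    queue.update(arrived_kb, drained_kb)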

At first sight, PFC appears to affect only a single link and only a specific traffic class. In practice, however, a PFC pause can trigger a chain reaction across the network. For example, if the egress buffer occupancy crosses the xOFF threshold for TC-Low on the interface toward Rank 7 on Leaf switch 1B-1, the switch sends PFC pause frames to both connected spine switches, instructing them to temporarily hold TC-Low packets in their buffers. As the egress buffers for TC-Low on the spine switches begin to fill and cross their own xOFF thresholds, they in turn send PFC pause frames to the rest of the leaf switches.

This behavior can quickly spread congestion beyond the original point of contention. In the worst case, multiple switches and links may experience temporary pauses. Once buffer occupancy drops below the xON threshold, Leaf switch 1B-1 sends resume signals, and traffic gradually recovers as normal transmission resumes. Even though the congestion episode is short, it disrupts collective operations and negatively impacts distributed training performance.

The upcoming chapters explain how Ultra Ethernet Network-Signaled Congestion Control (NSCC) and Receiver Credit-Based Congestion Control (RCCC) manage the amount of data that sources are allowed to send over the network, maximizing network utilization while avoiding congestion. The next chapters also describe how Explicit Congestion Notification (ECN), Packet Trimming, and Entropy Value-based Packet Spraying, when combined with NSCC and RCCC, contribute to a self-adjusting, reliable backend network.


2 comments:

  1. From a VSAIR perspective, this analysis reinforces a core principle: congestion is not a capacity failure, it is a constraint-enforcement failure. Every scenario described—incast, local HBM contention, outcast unfairness, ECMP collisions, and PFC cascades—occurs in systems that are deterministic, standards-compliant, and correctly engineered at the component level. What breaks is continuity across layers. When multiple lawful flows converge without scoped isolation, attribution, or bounded arbitration, the system remains correct yet becomes operationally unstable. VSAIR treats these events as compatibility collapses: rules are obeyed, but governance is absent. The implication is clear for AI-scale infrastructure: performance, safety, and predictability require explicit constraint governance (scope, fairness, failure domains, and fail-closed behavior), not just faster links or smarter hashing. Stability is enforced—not emergent.

    Reply: The beauty of Ultra Ethernet is that it is designed for routed best-effort networks with sufficient bandwidth, requiring no complex congestion control beyond DSCP-based classification/queuing and ECN marking. Support for packet trimming is a solid bonus. UE endpoints—both sender and receiver—can independently control the amount of data on the wire, adjust entropy values for fair load balancing, and coordinate with each other using signaling options carried in data, control, or acknowledgment packets.
