Tuesday, 6 January 2026

UET Congestion Management: CCC Base RTT

Calculating Base RTT

[Edit: January 7, 2026: Added RTT's role in the CWND adjustment process]

As described in the previous section, the Bandwidth-Delay Product (BDP) is a baseline value used when setting the maximum size (MaxWnd) of the Congestion Window (CWND). The BDP is calculated by multiplying the lower of the source and destination node link speeds by the Base Round-Trip Time (Base_RTT).

In addition to its role in BDP calculation, Base_RTT plays a key role in the CWND adjustment process. During operation, the RTT measured for each packet is compared against the Base_RTT. If the measured RTT is significantly higher than the Base_RTT, the CWND is reduced. If the RTT is close to or lower than the Base_RTT, the CWND is allowed to increase.
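As a minimal sketch, this RTT comparison can be expressed as follows. Note that this is illustrative only: the threshold, step size, and multiplicative backoff below are placeholder values of mine, not the NSCC update rules defined in the UET specification.

```python
def adjust_cwnd(cwnd, measured_rtt, base_rtt,
                max_wnd, threshold=1.5, step=4096):
    """Illustrative RTT-based window adjustment (not the UET algorithm).

    If the measured RTT is well above Base_RTT, queues are building up,
    so the window shrinks; if it is close to Base_RTT, the path is
    uncongested and the window may grow toward MaxWnd.
    """
    if measured_rtt > threshold * base_rtt:
        return max(cwnd // 2, step)       # congestion: back off
    return min(cwnd + step, max_wnd)      # headroom: grow

# Example: Base_RTT 5 us, measured RTT 12 us -> the window is halved.
print(adjust_cwnd(cwnd=131072, measured_rtt=12e-6,
                  base_rtt=5e-6, max_wnd=262144))
```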

This adjustment process is described in more detail in the upcoming sections.

The config_base_rtt parameter represents the RTT of the longest path between sender and receiver when no other packets are in flight. In other words, it reflects the minimum RTT under uncongested conditions. Figure 6-7 illustrates the individual delay components that together form the RTT.

Serialization Delay: The network shown in Figure 6-7 supports jumbo frames with an MTU of 9216 bytes. Serialization delay is the time required to place each bit on the wire, so the frame size must first be converted from bytes to bits:

9216 bytes × 8 = 73,728 bits

Serialization delay is then calculated by dividing the frame size in bits by the link speed. For a 100 Gbps link:

73,728 bits / 100 Gbps = 0.737 µs

Note: In a cut-through switched network, which is standard in modern 100 Gbps and above data center fabrics, the switch does not wait for the full 9216-byte frame to arrive before forwarding it. Instead, it processes only the packet header (typically the first 64–128 bytes) to determine the destination MAC or IP address and immediately begins transmitting the packet on the egress port. While the tail of the packet is still arriving on the ingress port, the head is already leaving the switch.

This behavior creates a pipeline effect, where bits flow through the network similarly to water through a pipe. As a result, when calculating end-to-end latency from a first-bit-in to last-bit-out perspective, the serialization delay is effectively incurred only once—the time required to place the packet onto the first link.
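The serialization arithmetic above can be reproduced in a few lines; the constants are the MTU and link speed from the example in the text.

```python
# Serialization delay: time to place one frame onto the wire.
# With cut-through switching it is effectively incurred only once,
# on the first link of the path.
MTU_BYTES = 9216            # jumbo frame, as in Figure 6-7
LINK_SPEED_BPS = 100e9      # 100 Gbps

frame_bits = MTU_BYTES * 8                   # 73,728 bits
serialization_s = frame_bits / LINK_SPEED_BPS
print(f"{serialization_s * 1e6:.3f} us")     # 0.737 us
```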

Propagation Delay: The time it takes for light to travel through the cabling infrastructure. In our example, the combined fiber-optic length between Rank 0 on Node 1A and GPU 7 on Node 2A is 50 meters. Light travels through fiber at approximately 5 ns per meter, resulting in a propagation delay of:

50 m × 5 ns/m = 250 ns = 0.250 µs

Switching Delay (Cut-Through): The time a packet spends inside a network switch while being processed before it is forwarded. This latency arises from internal operations such as examining the packet header, performing a Forwarding Information Base (FIB) lookup to determine the correct egress port, and updating internal buffers and queues.

In modern cut-through switches, much of this processing occurs while the packet is still being received, so the added delay per switch is very small. High-end 400G switches exhibit cut-through latencies on the order of 350–500 ns per switch. For a path traversing three switches, the total switching delay sums to approximately:

3 × 400 ns ≈ 1.2 µs

Thus, even with multiple hops, switching delay contributes only a modest portion to the total Base RTT in 100 Gbps and above data center fabrics.

Forward Error Correction (FEC) Delay: FEC ensures reliable, “lossless” data transfer in high-speed AI fabrics. It is required because high-speed optical links can experience bit errors due to signal distortion, fiber imperfections, or high-frequency signaling noise.

FEC operates using data blocks and symbols. The outgoing data is divided into fixed-size blocks, each consisting of data symbols. In 100G and 400G Ethernet FEC, one symbol = 10 bits. For example, a 514-symbol data block contains 514 × 10 = 5,140 bits of actual data.

To detect and correct errors, the switch or NIC ASIC computes parity symbols from the data block using Reed-Solomon (RS) math and appends them to the block. The combination of the original data and the parity symbols forms a codeword. For example, in RS(544, 514), the codeword has 544 symbols in total, of which 514 are data symbols and 30 are parity symbols. Each symbol is 10 bits, so the 30 parity symbols add 300 extra bits to the codeword.
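The RS(544, 514) codeword arithmetic above can be verified directly; the constants below mirror the figures given in the text.

```python
# RS(544, 514) codeword structure for 100G/400G Ethernet FEC.
SYMBOL_BITS = 10
DATA_SYMBOLS = 514
TOTAL_SYMBOLS = 544

parity_symbols = TOTAL_SYMBOLS - DATA_SYMBOLS    # 30 parity symbols
parity_bits = parity_symbols * SYMBOL_BITS       # 300 extra bits
codeword_bits = TOTAL_SYMBOLS * SYMBOL_BITS      # 5,440 bits total
overhead = parity_bits / (DATA_SYMBOLS * SYMBOL_BITS)

print(parity_symbols, parity_bits, codeword_bits)
print(f"parity overhead: {overhead:.1%}")        # ~5.8%
```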

At the receiver, the codeword is checked: the parity symbols are used to detect and correct any corrupted symbols in the original data block. Because RS-FEC operates on symbols rather than individual bits, if multiple bits within a single 10-bit symbol are corrupted, the entire symbol is corrected as a single unit.

The FEC latency (or accumulation delay) stems from the requirement to receive the entire codeword before error correction can begin, together with the time spent in the decoder itself. For a 400G RS(544, 514) codeword:

544 symbols × 10 bits/symbol = 5,440 bits total

At 400 Gbps, accumulating and decoding the codeword adds a fixed delay of ~150 ns per hop

This delay is a “fixed cost” of high-speed networking and must be included in the Base RTT calculation for AI fabrics. The sum of all delays gives the one-way delay, and the round-trip time (RTT) is obtained by multiplying this value by two. The config_base_rtt value in Figure 6-7 is the RTT rounded to a safe, reasonable integer.
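Putting the components together, the Base_RTT arithmetic can be sketched as below. The per-component values come from the examples in the text; the assumption that FEC delay is incurred on each of the four links of the three-switch path is mine, since the text gives only the per-hop figure.

```python
US = 1e-6
NS = 1e-9

def base_rtt(serialization_s, fiber_m, switch_hops, fec_links,
             switch_delay_s=400 * NS, fec_delay_s=150 * NS,
             fiber_ns_per_m=5):
    """One-way delay doubled to get RTT; all inputs are illustrative."""
    one_way = (serialization_s                    # incurred once (cut-through)
               + fiber_m * fiber_ns_per_m * NS    # propagation
               + switch_hops * switch_delay_s     # cut-through switching
               + fec_links * fec_delay_s)         # FEC accumulation/decoding
    return 2 * one_way

# 0.737 us serialization, 50 m fiber, three switches,
# FEC assumed on all four links of the one-way path.
rtt = base_rtt(0.737 * US, fiber_m=50, switch_hops=3, fec_links=4)
print(f"Base RTT ~ {rtt / US:.2f} us")   # ~5.57 us
```

Rounding the result up to a safe integer value would then give the config_base_rtt setting.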

Figure 6-7: Calculating Base_RTT Value.

Saturday, 3 January 2026

UET Congestion Management: Congestion Control Context

Congestion Control Context

Updated 5.1.2026: Added CWND computation example to the figure and CWND computation to the text.

Ultra Ethernet Transport (UET) uses a vendor-neutral, sender-specific congestion window–based congestion control mechanism together with flow-based, adjustable entropy-value (EV) load balancing to manage incast, outcast, local, link, and network congestion events. Congestion control in UET is implemented through coordinated sender-side and receiver-side functions to enforce end-to-end congestion control behavior.

On the sender side, UET relies on the Network-Signaled Congestion Control (NSCC) algorithm. Its main purpose is to regulate how quickly packets are transmitted by a Packet Delivery Context (PDC). The sender adapts its transmission window based on round-trip time (RTT) measurements and Explicit Congestion Notification (ECN) Congestion Experienced (CE) feedback conveyed through acknowledgments from the receiver.

On the receiver side, Receiver Credit-based Congestion Control (RCCC) limits incast pressure by issuing credits to senders. These credits define how much data a sender is permitted to transmit toward the receiver. The receiver also observes ECN-CE markings in incoming packets to detect path congestion. When congestion is detected, the receiver can instruct the sender to change the entropy value, allowing traffic to be steered away from congested paths.

Both sender-side and receiver-side mechanisms ultimately control congestion by limiting the amount of in-flight data, meaning data that has been sent but not yet acknowledged. In UET, this coordination is handled through a Congestion Control Context (CCC). The CCC maintains the congestion control state and determines the effective transmission window, thereby bounding the number of outstanding packets in the network. A single CCC may be associated with one or more PDCs communicating between the same Fabric Endpoint (FEP) within the same traffic class.


Initializing Congestion Control Context (CCC)

When the PDS Manager receives an RMA operation request from the SES layer, it first checks whether a suitable Packet Delivery Context (PDC) already exists for the JobID, destination FEP, traffic class, and delivery mode carried in the request. If no matching PDC is found, the PDS Manager allocates a new one.

For the first PDC associated with a particular destination, a Congestion Control Context (CCC) is required to manage end-to-end congestion for that flow. The PDS Manager requests a CCC from the CCC Manager within the Congestion Management Sublayer (CMS). The CCC Manager creates the CCC, which initially enters the IDLE state, containing only the basic data structures without an active configuration. After creation, the CCC is bound to the PDC.

Next, the CCC is assigned a congestion window (CWND), which is computed based on CCC configuration parameters. The first step is to compute the Bandwidth-Delay Product (BDP), which is used to derive the upper bound for the initial congestion window. The CWND limits the total number of bytes in flight across all paths between the sender and the receiver.

The BDP is computed as:

BDP = min(sender_link_speed, receiver_link_speed) × config_base_rtt

The link speed must be expressed in bytes per second, not bits per second, because BDP is measured in bytes. The min() operator selects the smaller of the sender and receiver link speeds. In an AI fabric, these values are typically identical. The sender link speed, receiver link speed, and config_base_rtt are pre-assigned configuration parameters.

UET typically allows a maximum in-flight volume of 1.5 × BDP to provide throughput headroom while minimizing excessive queuing. A factor of 1.0 represents the minimum required to “fill the pipe” and would set the BDP directly as the maximum congestion window (MaxWnd). However, the UET specification applies a factor of 1.5 to allow controlled oversubscription and improved utilization.
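As a sketch, the BDP and MaxWnd computation can be expressed as follows. The 400 Gbps link speed and 6 µs Base_RTT used in the example are illustrative values, not taken from the figure.

```python
def max_wnd_bytes(sender_gbps, receiver_gbps, base_rtt_s, factor=1.5):
    """BDP in bytes, scaled by the 1.5x headroom factor described above."""
    # Convert the slower link speed from Gbps to bytes per second,
    # since BDP is measured in bytes.
    link_bytes_per_s = min(sender_gbps, receiver_gbps) * 1e9 / 8
    bdp = link_bytes_per_s * base_rtt_s
    return factor * bdp

# 400 Gbps on both ends, 6 us Base_RTT:
# BDP = 50 GB/s * 6e-6 s = 300,000 bytes; MaxWnd = 450,000 bytes
print(max_wnd_bytes(400, 400, 6e-6))
```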

Once the CWND is assigned and the CCC is bound to the PDC, the CCC transitions from the IDLE state to the ACTIVE state. In the ACTIVE state, the CCC holds all configuration information and is associated with the PDC, but data transport has not yet started.

When the CCC is fully configured and ready for operation, it transitions to the READY state. This transition signals that the CCC can enforce congestion control policies and monitor traffic. At this point, the PDC is allowed to begin sending data, and the CCC tracks and regulates the flow according to the configured congestion control algorithms.
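The IDLE → ACTIVE → READY lifecycle described above can be sketched as a minimal state machine. The class and method names below are illustrative, not identifiers from the UET specification.

```python
from enum import Enum, auto

class CCCState(Enum):
    IDLE = auto()    # created: basic structures, no active configuration
    ACTIVE = auto()  # configured and bound to a PDC, no traffic yet
    READY = auto()   # enforcing congestion control; PDC may send data

class CCC:
    """Illustrative sketch of the CCC lifecycle described in the text."""
    def __init__(self):
        self.state = CCCState.IDLE
        self.cwnd = None

    def configure(self, cwnd):
        assert self.state is CCCState.IDLE
        self.cwnd = cwnd                  # derived from the BDP
        self.state = CCCState.ACTIVE

    def ready(self):
        assert self.state is CCCState.ACTIVE
        self.state = CCCState.READY       # PDC is now allowed to send

ccc = CCC()
ccc.configure(cwnd=450_000)
ccc.ready()
print(ccc.state.name)   # READY
```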

The CCC serves as the central control structure for congestion management, hosting either sender-side (NSCC) or receiver-side (RCCC) algorithms. A CCC is unidirectional and is instantiated independently on both the sender and the receiver, where it is locally associated with the corresponding PDC. Once in the READY state, the CCC maintains the state required to regulate data flow, enabling NSCC and RCCC to enforce congestion windows, credits, and path usage to prevent network congestion and maintain efficient data transport.

Note: In this model, the PDS Manager acts as the control-plane authority responsible for context management and coordination, while the Packet Delivery Context (PDC) performs data-plane execution under the control of the Congestion Control Context (CCC). Once the CCC is operational and the PDC is authorized for data transport, RMA data transfers proceed directly over the PDC without further involvement from the PDS Manager.



Figure 6-6: Congestion Context: Initialization.