Tuesday, 13 January 2026

Ultra Ethernet: Congestion Control Context

 Ultra Ethernet Transport (UET) uses a vendor-neutral, sender-specific congestion window–based congestion control mechanism together with flow-based, adjustable entropy-value (EV) load balancing to manage incast, outcast, local, link, and network congestion events. Congestion control in UET is implemented through coordinated sender-side and receiver-side functions to enforce end-to-end congestion control behavior.

On the sender side, UET relies on the Network-Signaled Congestion Control (NSCC) algorithm. Its main purpose is to regulate how quickly packets are transmitted by a Packet Delivery Context (PDC). The sender adapts its transmission window based on round-trip time (RTT) measurements and Explicit Congestion Notification (ECN) Congestion Experienced (CE) feedback conveyed through acknowledgments from the receiver.

On the receiver side, Receiver Credit-based Congestion Control (RCCC) limits incast pressure by issuing credits to senders. These credits define how much data a sender is permitted to transmit toward the receiver. The receiver also observes ECN-CE markings in incoming packets to detect path congestion. When congestion is detected, the receiver can instruct the sender to change the entropy value, allowing traffic to be steered away from congested paths.

Both sender-side and receiver-side mechanisms ultimately control congestion by limiting the amount of in-flight data, meaning data that has been sent but not yet acknowledged. In UET, this coordination is handled through a Congestion Control Context (CCC). The CCC maintains the congestion control state and determines the effective transmission window, thereby bounding the number of outstanding packets in the network. A single CCC may be associated with one or more PDCs communicating with the same destination Fabric Endpoint (FEP) within the same traffic class.
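
As a rough mental model, the state a CCC maintains on behalf of its PDCs can be pictured as a small record. The following Python sketch is purely illustrative; the field names are hypothetical and the UET specification defines the actual state layout:

    from dataclasses import dataclass, field

    @dataclass
    class CongestionControlContext:
        traffic_class: int
        cwnd_bytes: int            # effective transmission window (sender side, NSCC)
        max_wnd_bytes: int         # hard upper bound for cwnd
        credit_bytes: int = 0      # receiver-granted credits (receiver side, RCCC)
        in_flight_bytes: int = 0   # sent but not yet acknowledged
        pdc_ids: list[int] = field(default_factory=list)   # PDCs bound to this CCC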


Initializing Congestion Control Context (CCC)

When the PDS Manager receives an RMA operation request from the SES layer, it first checks whether a suitable Packet Delivery Context (PDC) already exists for the JobID, destination FEP, traffic class, and delivery mode. If no matching PDC is found, the PDS Manager allocates a new one.

For the first PDC associated with a specific FEP-to-FEP flow, a Congestion Control Context (CCC) is required to manage end-to-end congestion. The PDS Manager requests this context from the CCC Manager within the Congestion Management Sublayer (CMS). Upon instantiation, the CCC initially enters the IDLE state, containing basic data structures without an active configuration.

The CCC Manager then initializes the context by calculating values and thresholds, such as the Initial Congestion Window (Initial CWND) and Maximum CWND (MaxWnd), using pre-defined configuration parameters. Once these initial source states for the NSCC are set, the CCC is bound to the corresponding PDC.

When fully configured, the CCC transitions to the READY state. This transition signals that the CCC is authorized to enforce congestion control policies and monitor traffic. The CCC serves as the central control structure for congestion management, hosting either sender-side (NSCC) or receiver-side (RCCC) algorithms. Because a CCC is unidirectional, it is instantiated independently on both the sender and the receiver.

Once in the READY state, the PDC is permitted to begin data transmission. The CCC maintains the active state required to regulate flow, enabling the NSCC and RCCC to enforce windows, credits, and path usage to prevent network congestion and optimize transport efficiency.
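
The lifecycle described above can be condensed into a short sketch. This is a minimal illustration under the assumptions of this section; the helper name and dictionary layout are hypothetical, not UET-defined interfaces:

    from enum import Enum, auto

    class CCCState(Enum):
        IDLE = auto()    # instantiated: basic data structures, no active configuration
        READY = auto()   # fully configured: authorized to enforce congestion control

    def initialize_ccc(sender_bps: float, receiver_bps: float, base_rtt_s: float) -> dict:
        # 1. The CCC Manager instantiates the context in the IDLE state.
        ccc = {"state": CCCState.IDLE}
        # 2. Initial values are derived from pre-defined configuration parameters.
        bdp_bytes = min(sender_bps, receiver_bps) * base_rtt_s / 8
        ccc["max_wnd"] = 1.5 * bdp_bytes   # MaxWnd, covered in the next section
        ccc["cwnd"] = bdp_bytes            # Initial CWND; may be configured lower
        # 3. Fully configured: transition to READY, after which the bound PDC
        #    is permitted to begin data transmission.
        ccc["state"] = CCCState.READY
        return ccc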

Note: In this model, the PDS Manager acts as the control-plane authority responsible for context management and coordination, while the PDC handles data-plane execution under the guidance of the CCC. Once the CCC is operational, RMA data transfers proceed directly via the PDC without further involvement from the PDS Manager.



Figure 6-6: Congestion Context: Initialization.

Calculating Initial CWND


Following the initialization of the Congestion Control Context (CCC) for a Packet Delivery Context (PDC), specific configuration parameters are used to establish the Initial Congestion Window (CWND) and the Maximum Congestion Window (MaxWnd). 

The Congestion Window (CWND) defines the maximum number of "in-flight" bytes, data that has been transmitted but not yet acknowledged by the receiver. Effectively, the CWND regulates the volume of data allowed on the wire for a specific flow at any given time to prevent network saturation.
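
The gating rule this definition implies fits in a few lines. A sketch with illustrative names:

    def can_send(bytes_sent: int, bytes_acked: int, pkt_len: int, cwnd: int) -> bool:
        # A packet may leave only if it still fits inside the congestion window.
        in_flight = bytes_sent - bytes_acked   # transmitted but not yet acknowledged
        return in_flight + pkt_len <= cwnd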

The primary element for computing the CWND is the Bandwidth-Delay Product (BDP). To determine the path-specific BDP, the algorithm selects the slowest link speed and multiplies it by the configured base Round-Trip Time (config_base_rtt):

BDP = min(sender.linkspeed, receiver.linkspeed) × config_base_rtt

The config_base_rtt represents the latency over the longest physical path under zero-load conditions. This value is a static constant derived from the cumulative sum of:
  • Serialization delays (time to put bits on the wire)
  • Propagation delays (speed of light through fiber)
  • Switching delays (internal switch traversal)
  • FEC (Forward Error Correction) delays

Setting MaxWnd


The MaxWnd serves as a definitive upper limit for the CWND that cannot be exceeded under any circumstances. It is typically derived by multiplying the calculated BDP by a factor of 1.5. While a CWND equal to 1.0 x BDP is theoretically sufficient to saturate a link, real-world variables, such as transient bursts, scheduling jitter, or variations in switch processing, can cause the link to go idle if the window is too restrictive. UET allows the CWND to grow up to 1.5 x BDP to maintain high utilization and accommodate acknowledgment (ACK) clocking dynamics.

Example Calculation: Consider a flow where the slowest link speed is 100 Gbps and the config_base_rtt is 6.0 µs.

Calculate BDP (Bits): 100 × 10⁹ bps × 0.000006 s = 600,000 bits
Calculate BDP (Bytes): 600,000 / 8 = 75,000 bytes
Calculate MaxWnd: 75,000 × 1.5 = 112,500 bytes
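
The same arithmetic in runnable form, reproducing the numbers above (variable names are illustrative):

    LINK_BPS = 100e9     # slowest link on the path: 100 Gbps
    BASE_RTT = 6.0e-6    # config_base_rtt: 6.0 microseconds

    bdp_bits  = LINK_BPS * BASE_RTT      # 600,000 bits
    bdp_bytes = bdp_bits / 8             # 75,000 bytes
    max_wnd   = 1.5 * bdp_bytes          # 112,500 bytes

    # Sanity check: a window of 1.0 × BDP, replenished every RTT, sustains line rate.
    rate_bps = bdp_bytes * 8 / BASE_RTT  # 100e9 bps, i.e. the full 100 Gbps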

Note on Incast Prevention: While the "ideal" initial CWND is 1.0 x BDP, UET allows the starting window to be configured to a significantly smaller value (e.g., 10–32 KB or a few MTUs). This configuration prevents Incast congestion, a phenomenon where the aggregate traffic from multiple ingress ports exceeds the physical capacity of an egress port. By starting with a conservative CWND, the system ensures that the switch's egress buffers are not exhausted during the first RTT, providing the NSCC algorithm sufficient time to measure RTT inflation and modulate the flow rates.

A common misconception is that the BDP limits the transmission rate. In reality, the BDP defines the volume of data required to keep the "pipe" full. While the Initial CWND may be only 75,000 bytes, it is replenished every RTT. At a 6.0 µs RTT, this volume translates to a full 100 Gbps line rate:

600,000 bits / 6.0 µs = 600,000 / 0.000006 = 100 × 10⁹ bps = 100 Gbps

Therefore, a window of 1.0 x BDP achieves 100% utilization. The 1.5 x BDP (MaxWnd) simply provides the necessary headroom to prevent the link from going idle during minor acknowledgment delays.

Figure 6-7: CC Config Parameters, Initial CWND and MaxWnd.

Calculating New CWND


When the network is uncongested, indicated by a measured RTT remaining near the base_rtt, the NSCC algorithm performs an Additive Increase (AI) to grow the CWND. To ensure fairness across the entire fabric, the algorithm utilizes a universal Base_BDP parameter rather than the path-specific BDP.

The Base_BDP is a fixed protocol constant (typically 150,000 bytes, derived from a reference 100 Gbps link at 12 µs). The new CWND is calculated by adding a fraction of this constant to the current window:

CWND(new) = CWND(Init) + Base_BDP / Scaling Factor

Using a universal constant ensures Scale-Invariance in a mixed-speed fabric (e.g., 100G and 400G NICs). 

If a 400G NIC were to use its own BDP (300,000 bytes) for the increase step, its window would grow four times faster than that of a 100G NIC. By using the shared Base_BDP (150,000 bytes), both NICs increase their throughput by the same number of bytes per second. This "normalized acceleration" prevents faster NICs from starving slower flows during the capacity-seeking phase.

As illustrated in Figure 6-8, consider a flow with an Initial CWND of 75,000 bytes, a Base_BDP of 150,000 bytes, and a Scaling Factor of 1024:

Step Size = 150,000 / 1024 ≈ 146.5 bytes
New CWND = 75,000 + 146.5 = 75,146.5 bytes

Note: Scaling factors are ideally set to powers of 2 (e.g., 512, 1024, 2048, 4096, 8192) to allow the hardware to use fast bit-shifting operations instead of expensive division. 

Higher factors (e.g., 8192): Result in smaller, smoother increments (high stability). 
Lower factors (e.g., 512): Result in larger increments (faster convergence to link rate).
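
A sketch of this shift-based increase step using the example values above (constants and names are illustrative):

    BASE_BDP = 150_000   # universal constant: 100 Gbps reference link at 12 µs
    SHIFT    = 10        # scaling factor 1024 = 2**10, so division becomes a shift

    def additive_increase(cwnd: int) -> int:
        # Hardware-style integer shift: 150_000 >> 10 = 146 bytes per step
        # (exact division gives 146.48, rounded to 146.5 in the text).
        return cwnd + (BASE_BDP >> SHIFT)

    print(additive_increase(75_000))   # 75146: a 100G and a 400G NIC take the
                                       # same absolute step (scale-invariance)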

Figure 6-8: Increasing CWND.




Tuesday, 6 January 2026

UET Congestion Management: CCC Base RTT

Calculating Base RTT

[Edit: January 7, 2026: RTT role in CWND adjustment process]

As described in the previous section, the Bandwidth-Delay Product (BDP) is a baseline value used when setting the maximum size (MaxWnd) of the Congestion Window (CWND). The BDP is calculated by multiplying the lowest link speed among the source and destination nodes by the Base Round-Trip Time (Base_RTT).

In addition to its role in BDP calculation, Base_RTT plays a key role in the CWND adjustment process. During operation, the RTT measured for each packet is compared against the Base_RTT. If the measured RTT is significantly higher than the Base_RTT, the CWND is reduced. If the RTT is close to or lower than the Base_RTT, the CWND is allowed to increase.

This adjustment process is described in more detail in the upcoming sections.
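
The direction of that adjustment can be previewed as a simple comparison. This is a sketch only, since the precise NSCC update rules appear in later sections; the 1.2× threshold is an illustrative assumption, not a UET constant:

    def window_direction(measured_rtt: float, base_rtt: float,
                         threshold: float = 1.2) -> str:
        if measured_rtt > threshold * base_rtt:
            return "decrease"   # RTT inflation signals queue build-up
        return "increase"       # RTT near base_rtt: path is uncongested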

The config_base_rtt parameter represents the RTT of the longest path between sender and receiver when no other packets are in flight. In other words, it reflects the minimum RTT under uncongested conditions. Figure 6-7 illustrates the individual delay components that together form the RTT.

Serialization Delay: The network shown in Figure 6-7 supports jumbo frames with an MTU of 9216 bytes. Serialization delay is measured in time per bit, so the frame size must first be converted from bytes to bits:

9216 bytes × 8 = 73,728 bits

Serialization delay is then calculated by dividing the frame size in bits by the link speed. For a 100 Gbps link:

73,728 bits / 100 Gbps = 0.737 µs

Note: In a cut-through switched network, which is standard in modern 100 Gbps and above data center fabrics, the switch does not wait for the full 9216-byte frame to arrive before forwarding it. Instead, it processes only the packet header (typically the first 64–128 bytes) to determine the destination MAC or IP address and immediately begins transmitting the packet on the egress port. While the tail of the packet is still arriving on the ingress port, the head is already leaving the switch.

This behavior creates a pipeline effect, where bits flow through the network similarly to water through a pipe. As a result, when calculating end-to-end latency from a first-bit-in to last-bit-out perspective, the serialization delay is effectively incurred only once—the time required to place the packet onto the first link.

Propagation Delay: The time it takes for light to travel through the cabling infrastructure. In our example, the combined fiber-optic length between Rank 0 on Node A1 and GPU 7 on Node A2 is 50 meters. Light travels through fiber at approximately 5 ns per meter, resulting in a propagation delay of:

50 m × 5 ns/m = 250 ns = 0.250 µs

Switching Delay (Cut-Through): The time a packet spends inside a network switch while being processed before it is forwarded. This latency arises from internal operations such as examining the packet header, performing a Forwarding Information Base (FIB) lookup to determine the correct egress port, and updating internal buffers and queues.

In modern cut-through switches, much of this processing occurs while the packet is still being received, so the added delay per switch is very small. High-end 400G switches exhibit cut-through latencies on the order of 350–500 ns per switch. For a path traversing three switches, the total switching delay sums to approximately:

3 × 400 ns ≈ 1.2 µs

Thus, even with multiple hops, switching delay contributes only a modest portion to the total Base RTT in 100 Gbps and above data center fabrics.

Forward Error Correction (FEC) Delay: Forward Error Correction (FEC) ensures reliable, “lossless” data transfer in high-speed AI fabrics. It is required because high-speed optical links can experience bit errors due to signal distortion, fiber imperfections, or high-frequency signaling noise.

FEC operates using data blocks and symbols. The outgoing data is divided into fixed-size blocks, each consisting of data symbols. In 100G and 400G Ethernet FEC, one symbol = 10 bits. For example, a 514-symbol data block contains 514 × 10 = 5,140 bits of actual data.

To detect and correct errors, the switch or NIC ASIC computes parity symbols from the data block using Reed-Solomon (RS) math and appends them to the block. The combination of the original data and the parity symbols forms a codeword. For example, in RS(544, 514), the codeword has 544 symbols in total, of which 514 are data symbols and 30 are parity symbols. Each symbol is 10 bits, so the 30 parity symbols add 300 extra bits to the codeword.

At the receiver, the codeword is checked: the parity symbols are used to detect and correct any corrupted symbols in the original data block. Because RS-FEC operates on symbols rather than individual bits, if multiple bits within a single 10-bit symbol are corrupted, the entire symbol is corrected as a single unit.
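
The codeword arithmetic above is easy to verify (plain arithmetic, no UET-specific values):

    DATA_SYMBOLS, TOTAL_SYMBOLS, BITS_PER_SYMBOL = 514, 544, 10   # RS(544, 514)

    parity_symbols = TOTAL_SYMBOLS - DATA_SYMBOLS        # 30
    data_bits      = DATA_SYMBOLS * BITS_PER_SYMBOL      # 5,140
    parity_bits    = parity_symbols * BITS_PER_SYMBOL    # 300
    codeword_bits  = TOTAL_SYMBOLS * BITS_PER_SYMBOL     # 5,440
    overhead_pct   = 100 * parity_bits / data_bits       # ≈5.8 % added on the wire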

The FEC latency (or accumulation delay) comes from the requirement to receive the entire codeword before error correction can begin, plus the time needed to run the Reed-Solomon decoder itself. For a 400G RS(544, 514) codeword:

544 symbols × 10 bits/symbol = 5,440 bits total

At 400 Gbps, accumulating the 5,440 bits takes only ~13.6 ns; with decode processing included, FEC adds a fixed delay of ~150 ns per hop

This delay is a “fixed cost” of high-speed networking and must be included in the Base RTT calculation for AI fabrics. The sum of all delays gives the one-way delay, and the round-trip time (RTT) is obtained by multiplying this value by two. The config_base_rtt value in Figure 6-7 is the RTT rounded to a safe, reasonable integer.
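
Putting the components together reproduces the configured value. In this sketch the FEC hop count is an assumption for illustration (four link hops on a three-switch path); the figure defines the actual topology:

    # One-way delay components from the walk-through above, in microseconds.
    serialization = 0.737      # 9216-byte frame on the first 100 Gbps link, paid once
    propagation   = 0.250      # 50 m of fiber at ~5 ns/m
    switching     = 3 * 0.400  # three cut-through switches at ~400 ns each
    fec           = 4 * 0.150  # ~150 ns per hop, assuming four link hops

    one_way_us = serialization + propagation + switching + fec   # ≈2.79 µs
    rtt_us     = 2 * one_way_us                                  # ≈5.57 µs
    # Rounded up to a safe integer, this gives the config_base_rtt of 6.0 µs
    # used in the earlier CWND example.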

Figure 6-7: Calculating Base_RTT Value.
