Tuesday, 13 January 2026

Ultra Ethernet: Congestion Control Context

Ultra Ethernet Transport (UET) uses a vendor-neutral, sender-specific congestion window–based congestion control mechanism together with flow-based, adjustable entropy-value (EV) load balancing to manage incast, outcast, local, link, and network congestion events. Congestion control in UET is implemented through coordinated sender-side and receiver-side functions to enforce end-to-end congestion control behavior.

On the sender side, UET relies on the Network-Signaled Congestion Control (NSCC) algorithm. Its main purpose is to regulate how quickly packets are transmitted by a Packet Delivery Context (PDC). The sender adapts its transmission window based on round-trip time (RTT) measurements and Explicit Congestion Notification (ECN) Congestion Experienced (CE) feedback conveyed through acknowledgments from the receiver.

On the receiver side, Receiver Credit-based Congestion Control (RCCC) limits incast pressure by issuing credits to senders. These credits define how much data a sender is permitted to transmit toward the receiver. The receiver also observes ECN-CE markings in incoming packets to detect path congestion. When congestion is detected, the receiver can instruct the sender to change the entropy value, allowing traffic to be steered away from congested paths.

Both sender-side and receiver-side mechanisms ultimately control congestion by limiting the amount of in-flight data, meaning data that has been sent but not yet acknowledged. In UET, this coordination is handled through a Congestion Control Context (CCC). The CCC maintains the congestion control state and determines the effective transmission window, thereby bounding the number of outstanding packets in the network. A single CCC may be associated with one or more PDCs communicating between the same pair of Fabric Endpoints (FEPs) within the same traffic class.
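As a minimal sketch (hypothetical class and field names, not the specification's data structures), a CCC that bounds the in-flight bytes shared by its bound PDCs might look like this:

```python
# Sketch only: a CCC tracks the effective window and the bytes that
# have been sent but not yet acknowledged, shared across its PDCs.
class CCC:
    def __init__(self, cwnd_bytes):
        self.cwnd = cwnd_bytes      # effective transmission window
        self.in_flight = 0          # sent but not yet acknowledged

    def can_send(self, nbytes):
        # A PDC may transmit only while the shared window has room.
        return self.in_flight + nbytes <= self.cwnd

    def on_send(self, nbytes):
        self.in_flight += nbytes

    def on_ack(self, nbytes):
        # Acknowledged data frees window space for any bound PDC.
        self.in_flight -= nbytes

ccc = CCC(cwnd_bytes=75_000)
ccc.on_send(70_000)
print(ccc.can_send(4_000))   # True: 74,000 bytes fits in the window
print(ccc.can_send(8_000))   # False: would exceed the window
```

Any PDC bound to the same CCC draws from the same window, which is how one context can govern several packet streams between the same FEP pair.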


Initializing Congestion Control Context (CCC)

When the PDS Manager receives an RMA operation request from the SES layer, it first checks whether a suitable Packet Delivery Context (PDC) already exists for the JobID, destination FEP, traffic class, and delivery mode. If no matching PDC is found, the PDS Manager allocates a new one.

For the first PDC associated with a specific FEP-to-FEP flow, a Congestion Control Context (CCC) is required to manage end-to-end congestion. The PDS Manager requests this context from the CCC Manager within the Congestion Management Sublayer (CMS). Upon instantiation, the CCC initially enters the IDLE state, containing basic data structures without an active configuration.

The CCC Manager then initializes the context by calculating values and thresholds, such as the Initial Congestion Window (Initial CWND) and Maximum CWND (MaxWnd), using pre-defined configuration parameters. Once these initial source states for the NSCC are set, the CCC is bound to the corresponding PDC.

When fully configured, the CCC transitions to the READY state. This transition signals that the CCC is authorized to enforce congestion control policies and monitor traffic. The CCC serves as the central control structure for congestion management, hosting either sender-side (NSCC) or receiver-side (RCCC) algorithms. Because a CCC is unidirectional, it is instantiated independently on both the sender and the receiver.

Once in the READY state, the PDC is permitted to begin data transmission. The CCC maintains the active state required to regulate flow, enabling the NSCC and RCCC to enforce windows, credits, and path usage to prevent network congestion and optimize transport efficiency.

Note: In this model, the PDS Manager acts as the control-plane authority responsible for context management and coordination, while the PDC handles data-plane execution under the guidance of the CCC. Once the CCC is operational, RMA data transfers proceed directly via the PDC without further involvement from the PDS Manager.
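The lifecycle described above can be sketched as a small state machine. The class, field, and state names are illustrative assumptions; the specification does not prescribe these structures:

```python
# Hypothetical sketch of the CCC lifecycle: allocated in IDLE with no
# active configuration, then transitioned to READY once configured.
from enum import Enum

class CCCState(Enum):
    IDLE = "IDLE"
    READY = "READY"

class CongestionControlContext:
    def __init__(self):
        self.state = CCCState.IDLE   # basic structures, no config yet
        self.cwnd = None
        self.max_wnd = None

    def configure(self, initial_cwnd, max_wnd):
        # The CCC Manager derives these from pre-defined config parameters.
        self.cwnd = initial_cwnd
        self.max_wnd = max_wnd
        self.state = CCCState.READY  # authorized to enforce policy

ccc = CongestionControlContext()
ccc.configure(initial_cwnd=75_000, max_wnd=112_500)
print(ccc.state.name)  # READY
```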



Figure 6-6: Congestion Context: Initialization.

Calculating Initial CWND


Following the initialization of the Congestion Control Context (CCC) for a Packet Delivery Context (PDC), specific configuration parameters are used to establish the Initial Congestion Window (CWND) and the Maximum Congestion Window (MaxWnd). 

The Congestion Window (CWND) defines the maximum number of "in-flight" bytes: data that has been transmitted but not yet acknowledged by the receiver. Effectively, the CWND regulates the volume of data allowed on the wire for a specific flow at any given time to prevent network saturation.

The primary element for computing the CWND is the Bandwidth-Delay Product (BDP). To determine the path-specific BDP, the algorithm selects the slowest link speed and multiplies it by the configured base Round-Trip Time (config_base_rtt):

BDP = min(sender.linkspeed, receiver.linkspeed) x config_base_rtt

The config_base_rtt represents the latency over the longest physical path under zero-load conditions. This value is a static constant derived from the cumulative sum of:
  • Serialization delays (time to put bits on the wire)
  • Propagation delays (speed of light through fiber)
  • Switching delays (internal switch traversal)
  • FEC (Forward Error Correction) delays
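The composition of config_base_rtt can be illustrated as a simple sum. Note that the component values below are made-up placeholders chosen to total the 6.0 µs used in the worked example later; the real value is a static configured constant, not something measured at run time:

```python
# Illustrative placeholders only; config_base_rtt is a pre-defined constant.
serialization_us = 1.0   # time to put bits on the wire
propagation_us   = 2.5   # speed of light through fiber
switching_us     = 2.0   # internal switch traversal
fec_us           = 0.5   # forward error correction latency

config_base_rtt_us = serialization_us + propagation_us + switching_us + fec_us
print(config_base_rtt_us)  # 6.0
```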

Setting MaxWnd


The MaxWnd serves as a definitive upper limit for the CWND that cannot be exceeded under any circumstances. It is typically derived by multiplying the calculated BDP by a factor of 1.5. While a CWND equal to 1.0 x BDP is theoretically sufficient to saturate a link, real-world variables, such as transient bursts, scheduling jitter, or variations in switch processing, can cause the link to go idle if the window is too restrictive. UET allows the CWND to grow up to 1.5 x BDP to maintain high utilization and accommodate acknowledgment (ACK) clocking dynamics.

Example Calculation: Consider a flow where the slowest link speed is 100 Gbps and the config_base_rtt is 6.0 µs.

Calculate BDP (bits): 100 x 10⁹ bps x 0.000006 s = 600,000 bits
Calculate BDP (bytes): 600,000 / 8 = 75,000 bytes
Calculate MaxWnd: 75,000 x 1.5 = 112,500 bytes
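The three steps above can be reproduced directly. Working in Gbps and microseconds keeps the arithmetic exact (Gbps x µs yields kilobits); the same calculation also shows that a 1.0 x BDP window, replenished once per RTT, sustains full line rate:

```python
# Worked example: slowest link = 100 Gbps, config_base_rtt = 6.0 µs.
link_speed_gbps = 100
base_rtt_us = 6.0

bdp_bits  = link_speed_gbps * base_rtt_us * 1_000   # 600,000 bits
bdp_bytes = bdp_bits / 8                            # 75,000 bytes
max_wnd   = bdp_bytes * 1.5                         # 112,500 bytes

# The window does not cap the rate: one BDP per RTT is line rate.
rate_gbps = bdp_bits / (base_rtt_us * 1_000)        # 100.0 Gbps

print(bdp_bytes, max_wnd, rate_gbps)  # 75000.0 112500.0 100.0
```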

Note on Incast Prevention: While the "ideal" initial CWND is 1.0 x BDP, UET allows the starting window to be configured to a significantly smaller value (e.g., 10–32 KB or a few MTUs). This configuration prevents Incast congestion, a phenomenon where the aggregate traffic from multiple ingress ports exceeds the physical capacity of an egress port. By starting with a conservative CWND, the system ensures that the switch's egress buffers are not exhausted during the first RTT, providing the NSCC algorithm sufficient time to measure RTT inflation and modulate the flow rates.

A common misconception is that the BDP limits the transmission rate. In reality, the BDP defines the volume of data required to keep the "pipe" full. While the Initial CWND may be only 75,000 bytes, it is replenished every RTT. At a 6.0 µs RTT, this volume translates to a full 100 Gbps line rate:

600,000 bits / 6.0 µs = 600,000 / 0.000006 = 100 × 10⁹ bps = 100 Gbps

Therefore, a window of 1.0 x BDP achieves 100% utilization. The 1.5 x BDP (MaxWnd) simply provides the necessary headroom to prevent the link from going idle during minor acknowledgment delays.

Figure 6-7: CC Config Parameters, Initial CWND and MaxWnd.

Calculating New CWND


When the network is uncongested, indicated by a measured RTT remaining near the base_rtt, the NSCC algorithm performs an Additive Increase (AI) to grow the CWND. To ensure fairness across the entire fabric, the algorithm utilizes a universal Base_BDP parameter rather than the path-specific BDP.

The Base_BDP is a fixed protocol constant (typically 150,000 bytes, derived from a reference 100 Gbps link at 12 µs). The new CWND is calculated by adding a fraction of this constant to the current window:

CWND(new) = CWND(Init) + Base_BDP / Scaling Factor

Using a universal constant ensures Scale-Invariance in a mixed-speed fabric (e.g., 100G and 400G NICs). 

If a 400G NIC were to use its own BDP (300,000 bytes) for the increase step, its window would grow four times faster than that of a 100G NIC. By using the shared Base_BDP (150,000 bytes), both NICs increase their throughput by the same number of bytes per second. This "normalized acceleration" prevents faster NICs from starving slower flows during the capacity-seeking phase.

As illustrated in Figure 6-8, consider a flow with an Initial CWND of 75,000 bytes, a Base_BDP of 150,000 bytes, and a Scaling Factor of 1024:

Step Size = 150,000 / 1024 ≈ 146.5 bytes
New CWND = 75,000 + 146.5 = 75,146.5 bytes

Note: Scaling factors are ideally set to powers of 2 (e.g., 512, 1024, 2048, 4096, 8192) to allow the hardware to use fast bit-shifting operations instead of expensive division. 

  • Higher factors (e.g., 8192): smaller, smoother increments (high stability)
  • Lower factors (e.g., 512): larger increments (faster convergence to link rate)
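The bit-shift implementation can be sketched as follows. This is an illustration of the arithmetic, not the specification's exact update rule; note that an integer right shift truncates the 146.5-byte step from the prose example down to 146 bytes:

```python
# Additive increase with a power-of-two scaling factor, implemented
# as a right shift on integer byte counts, as hardware would do it.
BASE_BDP = 150_000          # universal constant (bytes)
SHIFT = 10                  # scaling factor 1024 = 2**10

def additive_increase(cwnd_bytes):
    # Same byte step for every flow, regardless of its own link speed,
    # which is what makes the increase scale-invariant across NIC speeds.
    return cwnd_bytes + (BASE_BDP >> SHIFT)

print(additive_increase(75_000))  # 75146 (step = 150,000 >> 10 = 146)
```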

Figure 6-8: Increasing CWND.



