Thursday, 5 February 2026

Ultra Ethernet: Receiver Credit-based Congestion Control (RCCC)

Introduction

Receiver Credit-Based Congestion Control (RCCC) is a cornerstone of the Ultra Ethernet transport architecture, specifically designed to eliminate incast congestion. Incast occurs at the last-hop switch when the aggregate data rate from multiple senders exceeds the capacity of the egress interface toward the target. This mismatch leads to rapid buffer exhaustion on that interface, resulting in packet drops and severe performance degradation.


The RCCC Mechanism

Figure 8-1 illustrates the operational flow of the RCCC algorithm. In a standard scenario without credit limits, source Rank 0 and Rank 1 might attempt to transmit at their full 100G line rates simultaneously. If the backbone fabric consists of 400G inter-switch links, the core utilization remains a comfortable 50% (200G total traffic). However, because the target host link is only 100G, the last-hop switch (Leaf 1B-1) becomes an immediate bottleneck. The switch is forced to queue packets that cannot be forwarded at the 100G egress rate, eventually triggering incast congestion and buffer overflows.
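The arithmetic behind this bottleneck can be restated directly. The short Python sketch below simply re-derives the figure's numbers; the rates and link speeds are the illustrative values from Figure 8-1, not values taken from the UET specification.

# Fan-in arithmetic for the Figure 8-1 scenario (illustrative values only).
sender_rates_gbps = [100, 100]          # Rank 0 and Rank 1 at full line rate
fabric_link_gbps = 400                  # inter-switch link in the backbone
target_link_gbps = 100                  # last-hop link toward the target

aggregate_gbps = sum(sender_rates_gbps)                  # 200 Gbps offered load
core_utilization = aggregate_gbps / fabric_link_gbps     # 0.5 -> fabric is comfortable
egress_load = aggregate_gbps / target_link_gbps          # 2.0 -> incast at the last hop

print(f"core: {core_utilization:.0%}, last-hop egress: {egress_load:.0%}")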

While "incast" occurs at the egress interface and can resemble head-of-line blocking, it is fundamentally a "fan-in" problem where multiple sources converge on a single receiver. Under RCCC, standard Explicit Congestion Notification (ECN) on the last-hop switch's egress interface is typically disabled for this traffic class. The reasoning is twofold:

Redundancy: In Ultra Ethernet, ECN is the primary signal for NSCC to adjust the Congestion Window (CWND) and rotate the Entropy Value (EV) to trigger packet-level load balancing across the fabric.

Path Convergence: At the last-hop switch, rotating the EV is ineffective because there is only a single physical path to the destination. Since RCCC provides a more granular, proactive mechanism to throttle senders based on the receiver's actual capacity, the reactive "slow down" signaling of ECN becomes unnecessary at this stage. By disabling ECN here, the receiver (Target) takes full responsibility for flow management, ensuring that the fabric remains clear of congestion markers that might otherwise trigger unnecessary path hunting.


Credit Allocation and Flow

Instead of relying on late-stage ECN signaling, the RCCC algorithm proactively throttles senders by granting credits that match the physical transport speed of the target's connection.

Discovery: When Rank 2 receives data, it identifies the sources via the CCC_ID field in the RUD_CC_REQ (the specific request type used when RCCC is enabled) and adds them to its Active Sender Table.

Calculation: The algorithm divides the total available bandwidth (for a 100 Gbps link, roughly 12.5 GB/s) among the active senders. In this example, each sender is allocated 6.25 GB/s (50 Gbps) worth of credits (see the sketch after this list).

Granting: These credits are transmitted back to the sources via ACK_CC packets once data is successfully committed to Rank 2’s memory.

Enforcement: Upon receiving the ACK_CC, the Congestion Control Context (CCC) associated with the sender’s Packet Delivery Control (PDC) updates its local credit table. The PDC only permits transmission based on these available credits, effectively capping the individual sender's rate at 50G. This ensures that when combined with the other sender, the aggregate rate at the receiver does not exceed its 100G link capacity.

This credit-grant loop is continuous. The RUD_CC_REQ carries "backlog" information, telling the target exactly how much data is waiting in the source's queue. By dynamically adjusting grants based on this feedback, RCCC ensures the backend network remains lossless.
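The grant loop described above can be summarized with a short receiver-side sketch. The class and method names (ActiveSenderTable, on_rud_cc_req, grant_credits) are hypothetical, and the 1-microsecond grant interval is an assumed time-slice borrowed from the later example, not a mandated constant.

LINK_BYTES_PER_SEC = 12.5e9              # 100 Gbps target link expressed in bytes/s

class ActiveSenderTable:
    # Hypothetical receiver-side bookkeeping for RCCC credit grants.
    def __init__(self):
        self.senders = {}                # source CCC_ID -> advertised backlog (bytes)

    def on_rud_cc_req(self, ccc_id, credit_target):
        # Discovery: learn the source and its remaining backlog.
        self.senders[ccc_id] = credit_target

    def grant_credits(self, slice_seconds=1e-6):
        # Calculation: split one time-slice of link capacity evenly among all
        # active senders. Granting: each share would be carried back in an ACK_CC.
        if not self.senders:
            return {}
        share = (LINK_BYTES_PER_SEC * slice_seconds) / len(self.senders)
        return {ccc_id: min(share, backlog)
                for ccc_id, backlog in self.senders.items()}

table = ActiveSenderTable()
table.on_rud_cc_req(0xA1, 255_987_500)   # Rank 0's CCC from the example
table.on_rud_cc_req(0xA2, 128_000_000)   # a second, hypothetical sender
print(table.grant_credits())             # 6.25 KB each per microsecond slice

With two entries in the table, each per-microsecond share works out to 6.25 KB, which corresponds to the 6.25 GB/s (50 Gbps) per-sender cap mentioned above.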

Figure 8-1: RCCC: Destination Flow Control.


Source RCCC Operation


The RCCC operation from the perspective of source UET Node-A begins when an application on Rank 0 initiates a 256 MB Remote Memory Access (RMA) write operation toward Rank 2. This request is handled by the Semantic Sublayer (SES), which translates the high-level command into a ses_pds_tx_req request for the Packet Delivery Sublayer (PDS). In our example, the PDS Manager determines that no communication channel currently exists between the Fabric Endpoints used for this connection, so it allocates a new Packet Delivery Control (PDC) from its general pool with the PDC identifier 0x4001. Simultaneously, it requests a Congestion Control Context (CCC) from the Congestion Management System (CMS), resulting in a dedicated context, CCC_ID = 0xA1, being configured and bound to the new PDC.

Once PDC and CCC are established, the system tracks the pending data through a two-tier backlog system. In our example, PDC 0x4001 updates its delta backlog with the full 256 MB of the request, which is then added to the CCC’s global backlog. This global value represents the total volume of data currently waiting for transport across all PDCs managed by that specific context. Because this is the start of the transaction, the global backlog moves from zero to 256 MB, establishing the total "demand" the source is prepared to place on the network.
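A minimal sketch of this two-tier bookkeeping, using hypothetical class names, might look as follows; the 256 MB figure is the example request size.

class CCC:
    def __init__(self, ccc_id):
        self.ccc_id = ccc_id
        self.global_backlog = 0          # bytes waiting across all PDCs bound to this CCC

class PDC:
    def __init__(self, pdc_id, ccc):
        self.pdc_id = pdc_id
        self.ccc = ccc
        self.delta_backlog = 0           # bytes waiting on this PDC alone

    def enqueue(self, nbytes):
        # New work from the SES: record it locally, then roll it up
        # into the CCC's global demand.
        self.delta_backlog += nbytes
        self.ccc.global_backlog += nbytes

ccc = CCC(0xA1)
pdc = PDC(0x4001, ccc)
pdc.enqueue(256_000_000)                 # the 256 MB RMA write
print(ccc.global_backlog)                # 256000000: the total demand on the network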

In our example, new contexts are pre-provisioned with initial credits scaled to the Bandwidth-Delay Product (BDP). While the theoretical capacity of a 100G link is 12.5 GB/s, the initial "pipe-cleaning" burst is much smaller, specifically 12.5 KB in this scenario. This value represents a safe, conservative fraction of the total BDP, ensuring that the source can trigger the feedback loop without the risk of overwhelming the receiver's buffers or the last-hop switch before the control loop fully engages. The CCC authorizes PDC 0x4001 to transmit this initial amount, subtracts it from the current cumulative credits, and updates the global backlog to show that this small portion is now in-flight, leaving 255,987,500 bytes in the queue.
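Continuing with the example numbers, the initial BDP-scaled authorization reduces to simple bookkeeping. The 12.5 KB initial credit is the value used in this scenario, not a fixed constant.

INITIAL_CREDIT_BYTES = 12_500            # conservative fraction of the BDP in this example

global_backlog = 256_000_000             # demand recorded for CCC 0xA1
cumulative_credits = INITIAL_CREDIT_BYTES

in_flight = min(INITIAL_CREDIT_BYTES, global_backlog)
cumulative_credits -= in_flight          # the pipe-cleaning burst consumes the credit
global_backlog -= in_flight              # that portion is now in flight, not queued

print(in_flight, global_backlog)         # 12500 255987500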

With this authorization from the CCC, the PDC passes the work request to the NIC, which fetches the data from memory and prepares the packet for transmission. In our example, the FEP Fabric Addresses (FA) are encoded into the IP header’s source and destination IP address fields, and the DSCP bits are configured to correspond to the TC-LOW traffic class. Additionally, the ECN bits are set to reflect that the packet is ECN-capable, ensuring visibility for Network Signaled Congestion Control (NSCC) if needed. The type of the PDS request is set to RUD_CC_REQ, which requires a pds.req_cc_state field. Here, this field carries the CCC_ID (0xA1) and the Credit Target, which describes the size of the sending CCC’s backlog. By including these parameters, the source explicitly informs the target of its total remaining data, allowing the receiver to calculate and return the next set of credit grants to keep the pipeline moving.

Note: Since the source does not yet have information regarding the PDC on the remote target, it sets the Destination PDC ID (DPDCID) in the pdc_info field to 0x0, notifying the target that a new PDC must be allocated from the global PDC pool. Furthermore, the SYN bit remains set until the first ACK_CC message is received, signaling to the target that the connection handshake and credit-granting loop are in the initialization phase.
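For reference, the header fields discussed above can be summarized as a small structure. This is a descriptive sketch of the example values, not the UET wire format; the class and field names are simplified.

from dataclasses import dataclass

@dataclass
class ReqCcState:
    ccc_id: int                          # sender's Congestion Control Context
    credit_target: int                   # backlog still waiting behind that CCC (bytes)

@dataclass
class RudCcReq:
    src_fa: str                          # source FEP Fabric Address (IP source field)
    dscp_class: str                      # traffic class encoded in the DSCP bits
    ecn_capable: bool                    # keeps NSCC signaling available in the fabric
    dpdcid: int                          # Destination PDC ID
    syn: bool                            # handshake / credit loop still initializing
    req_cc_state: ReqCcState

first_request = RudCcReq(
    src_fa="10.0.0.1",
    dscp_class="TC-LOW",
    ecn_capable=True,
    dpdcid=0x0,                          # remote PDC unknown: allocate from the global pool
    syn=True,
    req_cc_state=ReqCcState(ccc_id=0xA1, credit_target=255_987_500),
)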


Figure 8-2: Source RCCC Processing.

Target RCCC Operation – PDS Request


When the initial packet arrives at destination Node-B, the PDS Manager first checks for an existing PDC associated with the incoming connection from Fabric Address (FA) 10.0.0.1 and SPDCID 0x4001. Because no such PDC exists, the PDS Manager identifies this as a new connection request. The value of 0x0 in the pdc_info field instructs the target to allocate a General type PDC, ensuring the local delivery control matches the source's PDC type.

Since no Congestion Control Context (CCC) currently exists for this specific FEP-to-FEP connection, the PDS requests the CMS to allocate a new one. The CMS assigns CCC_ID 0xB1 and creates an entry in the Active Sender Table keyed by the source CCC_ID. This entry records the source address (FA 10.0.0.1) and the traffic class (TC-LOW) taken from the IP header. In addition, the source CCC_ID 0xA1 carried in the PDS header reports the source backlog size as a credit_target of 255,987,500 bytes.
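As a snapshot, the resulting Active Sender Table entry might be represented like this; the shape and key names are illustrative only.

# Hypothetical shape of the target's Active Sender Table after the first request.
active_sender_table = {
    0xA1: {                              # keyed by the source CCC_ID from the PDS header
        "source_fa": "10.0.0.1",
        "traffic_class": "TC-LOW",
        "credit_target": 255_987_500,    # backlog the source still wants to send
        "local_ccc_id": 0xB1,            # context allocated by the target's CMS
    }
}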

Simultaneously, the NIC extracts the semantic information from the SES header to identify the required operation. In our example, it recognizes a UET_WRITE command and determines the target memory address for the incoming data. Once the packet payload is verified, the data is forwarded to the High-Bandwidth Memory (HBM) Controller, where it waits for its turn to be committed to the physical memory.


Figure 8-3: Target RCCC Processing – PDS Request.


Target RCCC Operation – Credit Assignment


After receiving confirmation from the SES regarding the completed memory operation, the PDS prepares the response using an ACK_CC message. The CMS must now determine how much data the source is permitted to send in its next burst. In our example, the CMS allocates 12.5 KB of credits for CCC_ID 0xA1.

The math behind this allocation is a function of the receiver’s total capacity and the time-granularity of the control loop. While the NIC provides a 100 Gbps (12.5 GB/s) "pipe," the receiver does not grant a full second of data at once, as doing so would bypass the congestion control mechanism. Instead, it grants data in "time-slices"; in this scenario, each slice represents 1 microsecond of transmission. By dividing the total bandwidth by the number of active senders for that specific time-slice, the receiver ensures that the aggregate "demand" never exceeds the physical capabilities of the link.

In our example, with only one active sender, the calculation is:

(12.5 GB/s × 0.000001 s) ÷ 1 active sender = 12.5 KB

The RCCC algorithm is designed for dynamic fairness. Though not explicitly shown in Figure 8-4, if Rank 1 had a simultaneous transfer in progress, the Active Sender Table would list two sources. The CMS would then divide that same 1-microsecond "slice" between them, reducing the granted credit per source to 6.25 KB. This prevents "incast" congestion by ensuring that even if multiple sources transmit at once, their combined throughput matches exactly what the receiver can ingest.
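The per-sender grant therefore scales inversely with the number of active senders. A one-line helper makes the relationship explicit; the rate and slice length are the example's values.

def per_sender_grant(link_bytes_per_sec, slice_seconds, active_senders):
    # One time-slice of link capacity, divided evenly among the active senders.
    return (link_bytes_per_sec * slice_seconds) / active_senders

print(per_sender_grant(12.5e9, 1e-6, 1))   # 12500.0 bytes -> 12.5 KB for a single sender
print(per_sender_grant(12.5e9, 1e-6, 2))   # 6250.0 bytes  -> 6.25 KB each for two senders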

The PDS defines this response by setting the pds.cc_type to CC_CREDIT. The pds.ack_cc_state field is populated with this calculated credit value, while the ooo_count field tracks any Out-of-Order packets. To ensure this information is not delayed by standard data traffic, the DSCP bits in the IP header are set to TC-High. This gives the ACK_CC message "express" priority across the backend fabric, minimizing the time the source spends waiting for new credits and maintaining a high-performance, steady-state flow.

Crucially, the Target populates its own local PDC ID (0x4011) into the Source PDC Identifier (SPDCID) field of the PDS Prologue header. By doing so, it provides the return address necessary for the source to transition out of its initial "discovery" state.
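The reply's relevant fields, again as a descriptive sketch of the example values rather than the wire format:

from dataclasses import dataclass

@dataclass
class AckCc:
    cc_type: str                         # CC_CREDIT: this ACK carries a credit grant
    cumulative_credit: int               # total credit granted so far to this CCC (bytes)
    ooo_count: int                       # Out-of-Order packets observed at the target
    dscp_class: str                      # TC-High: expedite the grant across the fabric
    spdcid: int                          # target's own PDC ID, the "return address"

ack = AckCc(
    cc_type="CC_CREDIT",
    cumulative_credit=25_000,            # initial 12.5 KB plus the newly granted 12.5 KB
    ooo_count=0,
    dscp_class="TC-High",
    spdcid=0x4011,
)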

Figure 8-4: Target RCCC Processing – ACK_CC Message Reply.


Source-Side Processing of ACK_CC


When the ACK_CC message arrives at the source, the NIC identifies the target FEP based on the destination IP address. However, for high-speed internal processing, it uses the DPDCID in the PDS header as a local handle to jump directly to the correct PDC Context. From this entry, the NIC automatically resolves the CCC_ID associated with that specific PDC.

Once the correct CCC entry is identified, the source processes the new credit information. In our example, the receiver has sent a new Cumulative Credit value of 25,000 bytes. To determine the currently available window, the source performs a simple subtraction: 

Incremental Credit = Received Cumulative Credit – Local Cumulative Credit

By subtracting the previously recorded 12,500 bytes from the new 25,000 bytes, the source identifies an incremental grant of 12,500 bytes. The CCC then authorizes the PDC to transmit this amount. Simultaneously, the Global Backlog is updated by subtracting these 12,500 bytes from the remaining 255,987,500 bytes, keeping the sender’s demand signal accurate for the next request.
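A compact sketch of this source-side update, using the example's numbers (variable names are illustrative):

local_cumulative_credit = 12_500         # credit already recorded at the source
global_backlog = 255_987_500             # demand still queued behind CCC 0xA1

def on_ack_cc(received_cumulative_credit):
    global local_cumulative_credit, global_backlog
    incremental = received_cumulative_credit - local_cumulative_credit
    local_cumulative_credit = received_cumulative_credit
    global_backlog -= incremental        # those bytes may now be transmitted
    return incremental

print(on_ack_cc(25_000))                 # 12500: the newly authorized grant
print(global_backlog)                    # 255975000 bytes still waiting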

The PDC informs the NIC that it is cleared to construct packets fitting this allowed credit size (respecting the NIC’s MTU). The NIC fetches the data from memory, packetizes it, and transports it to the destination. This control loop continues—updating demand and receiving cumulative grants—until the entire backlog has been transported and acknowledged.

Once the job is complete, the PDC context is closed. If no other PDCs are currently associated with that CCC_ID, the CCC is also closed. This hierarchical teardown ensures that no unnecessary hardware resources or bandwidth are reserved in the AI Fabric once the work is done.
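A final sketch of the hierarchical teardown; the reference-counting approach shown here is an assumption about how an implementation might track PDC-to-CCC bindings, not something mandated by the text.

def close_pdc(pdc_id, pdc_to_ccc, ccc_refcount):
    # Close the PDC; release the CCC only when no other PDC still references it.
    ccc_id = pdc_to_ccc.pop(pdc_id)
    ccc_refcount[ccc_id] -= 1
    if ccc_refcount[ccc_id] == 0:
        del ccc_refcount[ccc_id]         # CCC freed: no resources stay reserved
        return ccc_id
    return None

bindings = {0x4001: 0xA1}
refcounts = {0xA1: 1}
print(close_pdc(0x4001, bindings, refcounts))   # 0xA1 (161): CCC released as well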