Calculating Base RTT
[Edit: January 7, 2026, RTT role in CWND adjustment process]
As described in the previous section, the Bandwidth-Delay Product (BDP) serves as a baseline when setting the maximum size (MaxWnd) of the Congestion Window (CWND). The BDP is calculated by multiplying the lowest link speed on the path between the source and destination nodes by the Base Round-Trip Time (Base_RTT).
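For a quick sanity check, the BDP arithmetic can be expressed in a few lines of Python. The link speed and Base_RTT values below are illustrative assumptions, not values taken from Figure 6-7:

    # Hypothetical inputs: slowest link on the path and an assumed Base_RTT
    link_speed_bps = 100e9      # 100 Gbps
    base_rtt_s = 6e-6           # 6 microseconds (assumed)

    bdp_bits = link_speed_bps * base_rtt_s    # bits that fit "in flight"
    bdp_bytes = bdp_bits / 8
    print(f"BDP = {bdp_bits:,.0f} bits = {bdp_bytes:,.0f} bytes")
    # BDP = 600,000 bits = 75,000 bytes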
In addition to its role in BDP calculation, Base_RTT plays a key role in the CWND adjustment process. During operation, the RTT measured for each packet is compared against the Base_RTT. If the measured RTT is significantly higher than the Base_RTT, the CWND is reduced. If the RTT is close to or lower than the Base_RTT, the CWND is allowed to increase.
This adjustment process is described in more detail in the upcoming sections.
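As a rough preview, the comparison logic can be sketched in Python. The thresholds and step sizes below are illustrative assumptions only, not values from any specific congestion-control algorithm:

    def adjust_cwnd(cwnd, measured_rtt, base_rtt, max_wnd):
        # Assumed thresholds: >1.5x Base_RTT suggests congestion,
        # <=1.05x Base_RTT suggests the path is effectively idle.
        if measured_rtt > 1.5 * base_rtt:
            return max(1, cwnd - 1)           # queue building up: shrink CWND
        if measured_rtt <= 1.05 * base_rtt:
            return min(max_wnd, cwnd + 1)     # uncongested: grow toward MaxWnd
        return cwnd                           # in between: hold steady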
The config_base_rtt parameter represents the RTT of the longest path between sender and receiver when no other packets are in flight. In other words, it reflects the minimum RTT under uncongested conditions. Figure 6-7 illustrates the individual delay components that together form the RTT.
Serialization Delay: The network shown in Figure 6-7 supports jumbo frames with an MTU of 9216 bytes. Serialization delay is the time required to clock every bit of a frame onto the link, so the frame size must first be converted from bytes to bits:
9216 bytes × 8 = 73,728 bits
Serialization delay is then calculated by dividing the frame size in bits by the link speed. For a 100 Gbps link:
73,728 bits / 100 Gbps = 0.737 µs
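The same calculation in Python, using the MTU and link speed from the example:

    mtu_bytes = 9216
    link_speed_bps = 100e9                    # 100 Gbps

    frame_bits = mtu_bytes * 8                # 73,728 bits
    serialization_s = frame_bits / link_speed_bps
    print(f"{serialization_s * 1e6:.3f} us")  # 0.737 us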
Note: In a cut-through switched network, which is standard in modern 100 Gbps and above data center fabrics, the switch does not wait for the full 9216-byte frame to arrive before forwarding it. Instead, it processes only the packet header (typically the first 64–128 bytes) to determine the destination MAC or IP address and immediately begins transmitting the packet on the egress port. While the tail of the packet is still arriving on the ingress port, the head is already leaving the switch.
This behavior creates a pipeline effect, where bits flow through the network similarly to water through a pipe. As a result, when calculating end-to-end latency from a first-bit-in to last-bit-out perspective, the serialization delay is effectively incurred only once—the time required to place the packet onto the first link.
Propagation Delay: The time it takes for light to travel through the cabling infrastructure. In our example, the total fiber-optic length between Rank 0 on Node A1 and GPU 7 on Node A2 is 50 meters. Light travels through fiber at approximately 5 ns per meter, resulting in a propagation delay of:
50 m × 5 ns/m = 250 ns = 0.250 µs
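The propagation component is a one-liner; only the fiber length and the per-meter constant matter:

    fiber_length_m = 50
    fiber_ns_per_m = 5                        # light in fiber: ~5 ns per meter
    propagation_ns = fiber_length_m * fiber_ns_per_m
    print(f"{propagation_ns} ns")             # 250 ns = 0.250 us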
Switching Delay (Cut-Through): The time a packet spends inside a network switch while being processed before it is forwarded. This latency arises from internal operations such as examining the packet header, performing a Forwarding Information Base (FIB) lookup to determine the correct egress port, and updating internal buffers and queues.
In modern cut-through switches, much of this processing occurs while the packet is still being received, so the added delay per switch is very small. High-end 400G switches exhibit cut-through latencies on the order of 350–500 ns per switch. For a path traversing three switches, the total switching delay sums to approximately:
3 × 400 ns ≈ 1.2 µs
Thus, even with multiple hops, switching delay contributes only a modest portion to the total Base RTT in 100 Gbps and above data center fabrics.
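The per-path switching delay is simply the per-switch latency times the hop count. The 400 ns figure below is an assumed midpoint of the 350–500 ns range mentioned above:

    switch_hops = 3
    cut_through_ns = 400                      # assumed per-switch latency
    switching_delay_ns = switch_hops * cut_through_ns
    print(f"{switching_delay_ns} ns")         # 1,200 ns = 1.2 us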
Forward Error Correction (FEC) Delay: FEC provides reliable, “lossless” data transfer in high-speed AI fabrics. It is required because high-speed optical links can experience bit errors caused by signal distortion, fiber imperfections, or high-frequency signaling noise.
FEC operates using data blocks and symbols. The outgoing data is divided into fixed-size blocks, each consisting of data symbols. In 100G and 400G Ethernet FEC, one symbol = 10 bits. For example, a 514-symbol data block contains 514 × 10 = 5,140 bits of actual data.
To detect and correct errors, the switch or NIC ASIC computes parity symbols from the data block using Reed-Solomon (RS) math and appends them to the block. The combination of the original data and the parity symbols forms a codeword. For example, in RS(544, 514), the codeword has 544 symbols in total, of which 514 are data symbols and 30 are parity symbols. Each symbol is 10 bits, so the 30 parity symbols add 300 extra bits to the codeword.
At the receiver, the codeword is checked: the parity symbols are used to detect and correct any corrupted symbols in the original data block. Because RS-FEC operates on symbols rather than individual bits, if multiple bits within a single 10-bit symbol are corrupted, the entire symbol is corrected as a single unit.
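The RS(544, 514) structure and its parity overhead can be checked with a few lines of Python:

    SYMBOL_BITS = 10                                 # one FEC symbol = 10 bits
    total_symbols, data_symbols = 544, 514           # RS(544, 514)

    parity_symbols = total_symbols - data_symbols    # 30
    data_bits = data_symbols * SYMBOL_BITS           # 5,140
    parity_bits = parity_symbols * SYMBOL_BITS       # 300
    print(f"overhead = {parity_bits / data_bits:.1%}")   # ~5.8%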
The FEC latency (or accumulation delay) comes from the requirement to receive the entire codeword before error correction can begin. For a 400G RS(544, 514) codeword:
• 544 symbols × 10 bits/symbol = 5,440 bits total
• At 400 Gbps, receiving those 5,440 bits takes only ~13.6 ns; with symbol distribution across lanes, buffering, and Reed-Solomon decoding, this adds a fixed delay of ~150 ns per hop
This delay is a “fixed cost” of high-speed networking and must be included in the Base RTT calculation for AI fabrics. The sum of all delay components gives the one-way delay, and the round-trip time (RTT) is obtained by multiplying this value by two. The config_base_rtt value in Figure 6-7 is this RTT rounded to a safe, reasonable integer.
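Putting the components together, a rough roll-up of this section’s numbers looks like the sketch below. The FEC hop count is an assumption (one decode per received link on a three-switch path), and the final rounding is left to the operator:

    # One-way delay components from this section, in microseconds
    serialization_us = 0.737          # paid once end-to-end (cut-through)
    propagation_us = 0.250            # 50 m of fiber
    switching_us = 3 * 0.400          # three cut-through switches
    fec_us = 4 * 0.150                # assumed 4 links, ~150 ns per hop

    one_way_us = serialization_us + propagation_us + switching_us + fec_us
    rtt_us = 2 * one_way_us
    print(f"one-way = {one_way_us:.2f} us, RTT = {rtt_us:.2f} us")
    # one-way = 2.79 us, RTT = 5.57 us -> round up for config_base_rtt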