  QFX5K Packet Buffer Architecture [Tech post]

    Posted 07-19-2023 16:41
    Edited by spuluka 10-10-2023 05:39

    Overview

    On QFX5K platforms, every packet that enters the ingress pipeline is stored in the central MMU packet buffer before it egresses. Packet buffers are required for burst absorption, packet replication, and several other use cases. Without packet buffers, bursty traffic would be tail dropped, leading to poor network performance. An excess burst might be received on an egress port in the following scenarios:

    ·      Multiple ingress ports sending traffic to a single egress port.

    ·      Traffic from high-speed ingress ports to low-speed egress ports.

    Packet buffers are also essential for implementing congestion-management features such as Priority Flow Control (PFC), Explicit Congestion Notification (ECN), and WRED.

    QFX5K data center switches have a shallow on-chip buffer. These switches use Broadcom's Trident and Tomahawk series ASICs. The shallow on-chip buffer provides low-latency packet forwarding. Managing this limited packet buffer among the different congested queues and providing fair buffer access is therefore very important. This document explains the QFX5K buffer architecture in detail.

    Packet Buffers

    On QFX5K platforms, on-chip buffers are measured and allocated in units of cells.

    Cell size varies across platforms (see the table below). A packet is stored across multiple cells depending on its size. The first cell stores 64 bytes of packet metadata (except on QFX5220, where the metadata is only 40 bytes), and the remaining space is used to store actual packet data. Subsequent cells are used only to store packet data.

    Platforms                               Cell size (bytes)
    QFX5100, QFX5110, QFX5200, QFX5210      208
    QFX5120                                 256
    QFX5220, QFX5130                        254
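
    As a rough sketch of this cell accounting (the helper function below and the example values are illustrative only, not an official tool; cell sizes and metadata sizes are taken from the description and table above), the number of cells a packet consumes can be estimated as follows:

        import math

        # Cell sizes in bytes, from the table above.
        CELL_SIZE = {
            "QFX5100": 208, "QFX5110": 208, "QFX5200": 208, "QFX5210": 208,
            "QFX5120": 256,
            "QFX5220": 254, "QFX5130": 254,
        }
        # First-cell metadata: 40 bytes on QFX5220, 64 bytes on the other platforms.
        METADATA = {"QFX5220": 40}

        def cells_for_packet(packet_bytes, platform):
            cell = CELL_SIZE[platform]
            meta = METADATA.get(platform, 64)
            first_cell_data = cell - meta               # payload space in the first cell
            if packet_bytes <= first_cell_data:
                return 1
            remaining = packet_bytes - first_cell_data  # data carried by later cells
            return 1 + math.ceil(remaining / cell)

        # A 1500-byte packet on QFX5120: 256 - 64 = 192 bytes fit in the first cell,
        # and the remaining 1308 bytes need ceil(1308 / 256) = 6 more cells, 7 in total.
        print(cells_for_packet(1500, "QFX5120"))        # 7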

     

    Packets from an ingress port are placed in the MMU buffer before being scheduled by the egress port. With no congestion on the egress queues, these packets are dequeued from the MMU as soon as they arrive, so at most ~3 KB of buffer utilization will be seen per port, depending on the port load. In case of congestion, packets continue to be placed in MMU buffers until the buffers are exhausted.
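
    To relate that per-port figure back to cells (a small worked example; the platform choice is arbitrary and per-packet metadata overhead is ignored), ~3 KB of in-flight data corresponds to only about a dozen cells on a QFX5120:

        # ~3 KB of uncongested, in-flight data on QFX5120 (256-byte cells),
        # ignoring the per-packet metadata stored in each first cell.
        cell_size = 256                       # bytes per cell on QFX5120
        in_flight_bytes = 3 * 1024
        print(in_flight_bytes // cell_size)   # about 12 cells per port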

    Concept of Ingress and Egress buffers

    Buffer memory has separate ingress and egress accounting to make accept, drop, or pause decisions. Because the switch has a single pool of memory with separate ingress and egress accounting, the full amount of buffer memory is available from both the ingress and the egress perspective. Packets are accounted for as they enter and leave the switch, but there is no concept of a packet arriving at an ingress buffer and then being moved to an egress buffer. 

    The MMU tracks cell utilization from multiple perspectives to support multiple buffering goals. These goals typically include asserting flow control to avoid packet loss, tail dropping when a resource becomes congested, or randomly dropping packets so that stateful protocols can degrade gracefully. The multiple perspectives here are chiefly the ingress and egress perspectives; this is also referred to as ingress admission control and egress admission control.

    Buffers are accounted for by both ingress admission control and egress admission control against different entities: Priority Group (ingress), Queue (egress), Service Pool, and Port. All of these entities have static and dynamic thresholds for buffer utilization. Once any ingress or egress buffer threshold is reached, further packets are dropped; packets already stored up to the threshold will still egress.
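
    The toy model below (illustrative only, not actual MMU code; the entity names, threshold values, and method names are made up for this sketch) shows the idea of dual accounting with static thresholds: a single shared cell pool whose cells are counted against both an ingress entity (a priority group) and an egress entity (a queue), where a packet is admitted only if neither view would exceed its threshold:

        from dataclasses import dataclass, field
        from typing import Dict

        @dataclass
        class Counter:
            threshold: int      # static threshold in cells (hypothetical value)
            used: int = 0

        @dataclass
        class SharedMMU:
            # One shared memory pool, tracked from two perspectives.
            ingress_pg: Dict[str, Counter] = field(default_factory=dict)    # priority groups
            egress_queue: Dict[str, Counter] = field(default_factory=dict)  # queues

            def admit(self, pg, queue, cells):
                ipg, eq = self.ingress_pg[pg], self.egress_queue[queue]
                # Both ingress and egress admission control must accept the packet.
                if ipg.used + cells > ipg.threshold or eq.used + cells > eq.threshold:
                    return False            # threshold reached: packet is tail dropped
                ipg.used += cells           # the same cells are counted in both views
                eq.used += cells
                return True

            def release(self, pg, queue, cells):
                # When the packet egresses, both views release the cells.
                self.ingress_pg[pg].used -= cells
                self.egress_queue[queue].used -= cells

        mmu = SharedMMU({"pg0": Counter(threshold=1000)}, {"q3": Counter(threshold=800)})
        print(mmu.admit("pg0", "q3", cells=7))   # True while both counters are under threshold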

    The desired property for lossy traffic is tail drops on the egress queues during congestion, because ingress drops would cause head-of-line blocking, which will impact