  • 1.  Packet Drops - output error

    Posted 07-03-2023 07:29

    Hi all,

    After receiving user complaints, I took it upon myself to investigate the issue. Upon inspection of my entire network, including EX2300 (two or three units in a cluster) and EX3400 (two units in a cluster), I discovered packet drops occurring on the egress of the users' 1Gb ports. I promptly opened a ticket with Juniper for assistance, but unfortunately, they were unable to provide a solution. They did acknowledge the presence of microbursts, but beyond that, their support was limited.

    In an attempt to mitigate the issue, I configured the following:

    set chassis fpc 0 pic 0 q-pic-large-buffer
    set chassis fpc 1 pic 0 q-pic-large-buffer

    Regrettably, this configuration adjustment did not yield the desired results.

    If anyone has any further insights or suggestions on how to address this problem of packet drops on the egress ports, I would greatly appreciate your assistance.
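
    For reference, this is how I have been watching the drops; the per-queue counters make microburst tail drops visible (ge-0/0/10 is just an example port, substitute one of the affected user ports):

    show interfaces ge-0/0/10 extensive | match "error|drop"
    show interfaces queue ge-0/0/10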



  • 2.  RE: Packet Drops - output error

    Posted 07-10-2023 03:59

    Hi, does nobody have any suggestions?


  • 3.  RE: Packet Drops - output error

    Posted 07-11-2023 09:45

    Have you tried this?

    set class-of-service shared-buffer percent 100

    I recommend this setting for all deployments. I have seen drastic improvements by using this.
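
    If you do try it, "commit confirmed" is a safe way to apply it remotely; the change rolls back automatically after the timeout (5 minutes in this example) unless you confirm the commit:

    set class-of-service shared-buffer percent 100
    commit confirmed 5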

  • 4.  RE: Packet Drops - output error

    Posted 07-12-2023 09:10


    Yes, I forgot to mention that here. I am already using this command on every switch, but I was wondering whether 100 is the appropriate number. Is there a specific "magic number", or what is considered best practice? It's not always beneficial to use the maximum value. And if I don't use this command, what is the default value?



  • 5.  RE: Packet Drops - output error

    Posted 07-12-2023 10:13

    The default setting is platform specific, I think. Other vendors use around 25%, and I suspect Juniper's default is somewhere in the 25-40% range. Setting 100% has no negative impact in normal or even unusual use cases; it only matters in "crazy corner cases". If multiple interfaces need the shared pool at the same time, they contend for it, which gives each of them semi-fair access to the pool anyway. If that happens often, you simply have too little capacity, and you really should upgrade the links.
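
    For what it's worth, you can check what is actually configured (and, on platforms that expose it, the live buffer allocation) with something like:

    show configuration class-of-service shared-buffer
    show class-of-service shared-buffer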