I'm trying to better understand how CoS is applied on the network and at which nodes it is meant to be configured.
Currently, the network I'm trying to tweak is composed of EX2300 switches that connect to a pair of non-Juniper L3 switches, which then connect to SRX devices.
I read through the Juniper Day One guide comparing Cisco's QoS to Juniper's CoS, and admittedly I am not well versed in QoS, much less Juniper's flavor of it.
That being said, I am aware of what is configured on this network. In a nutshell, the EX switches have no CoS configured at all, while the SRX devices use firewall filters to (as I understand it) classify traffic: the filters are applied for CoS, forwarding classes have been mapped to queues, and schedulers and scheduler maps have been defined and assigned to the interfaces facing the ISP.
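For reference, here is a minimal sketch of what I believe that SRX configuration looks like — the interface, filter, and scheduler names are placeholders of my own, not the actual config:

```
# Multifield classification via a firewall filter (hypothetical names)
set firewall family inet filter CLASSIFY term VOICE from dscp ef
set firewall family inet filter CLASSIFY term VOICE then forwarding-class expedited-forwarding
set firewall family inet filter CLASSIFY term VOICE then accept
set firewall family inet filter CLASSIFY term OTHER then forwarding-class best-effort
set firewall family inet filter CLASSIFY term OTHER then accept
# Forwarding classes mapped to queues
set class-of-service forwarding-classes class expedited-forwarding queue-num 5
set class-of-service forwarding-classes class best-effort queue-num 0
# Schedulers and a scheduler map, applied to the ISP-facing interface
set class-of-service schedulers VOICE-SCHED transmit-rate percent 30
set class-of-service schedulers VOICE-SCHED priority strict-high
set class-of-service schedulers BE-SCHED transmit-rate percent 70
set class-of-service scheduler-maps ISP-MAP forwarding-class expedited-forwarding scheduler VOICE-SCHED
set class-of-service scheduler-maps ISP-MAP forwarding-class best-effort scheduler BE-SCHED
set class-of-service interfaces ge-0/0/0 scheduler-map ISP-MAP
# Filter applied on the inside-facing interface to classify inbound traffic
set interfaces ge-0/0/1 unit 0 family inet filter input CLASSIFY
```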
What I am lacking is clarity on why this configuration makes sense (or why it does not). The Juniper document mentions concepts I recall from Cisco regarding traffic shaping and policing, which do different things; at the same time, based on the doc, those involve additional configuration beyond what I've mentioned so far.
My assumption (based on that guide) is that setting up the filters, assigning the classes to the queues, and defining the schedulers/maps alone does not do anything to the traffic (i.e. we still need traffic shaping or policing). What's more, my understanding is that with this bare config, the queues only apply to outbound traffic (in this case, traffic leaving the SRX). Would that be right?
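For what it's worth, my reading of the guide is that shaping and policing look roughly like this in Junos — the 50m rate, interface, and names below are just placeholders on my part:

```
# Shaping: egress only, buffers traffic above the configured rate
set class-of-service interfaces ge-0/0/0 shaping-rate 50m
# Policing: drops (or re-marks) traffic above the rate, applied via a firewall filter
set firewall policer LIMIT-50M if-exceeding bandwidth-limit 50m burst-size-limit 625k
set firewall policer LIMIT-50M then discard
set firewall family inet filter RATE-LIMIT term ALL then policer LIMIT-50M
```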
The reason I'm asking all of this is that my assumption is the traffic should be getting classified and marked at the switches before it is sent to the core and on to the SRX. It's not clear where in the network you're meant to configure each piece of CoS, nor what the configuration I've described actually does and for which traffic flows. Are the queues doing anything if they only adjust transmission rate and buffer size for the forwarding classes?
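From the guide, what I had pictured at the EX switches is something like a DSCP classifier on the access ports plus a rewrite rule toward the core — again, the classifier name and interfaces here are placeholders, not real config:

```
# Trust/classify DSCP markings on an access port (hypothetical names)
set class-of-service classifiers dscp TRUST-DSCP forwarding-class expedited-forwarding loss-priority low code-points ef
set class-of-service interfaces ge-0/0/10 unit 0 classifiers dscp TRUST-DSCP
# Re-mark traffic on the uplink so downstream devices see the DSCP values
set class-of-service interfaces ge-0/0/47 unit 0 rewrite-rules dscp default
```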
The goal is to guarantee and prioritize voice and video traffic on the network, and while there is only one site with only the devices listed, I am not very clear on what could be done here.
The basic point of QoS is to prioritize traffic when it hits links that will be over capacity, so that the right flow is first in line and a big flow does not starve everything else.
In the network you describe, where this is only configured facing the ISP, I would guess the bandwidth limit on that link is the only place in the network with congestion potential, so that is where they configured prioritization. And since they don't need to mark the traffic at the edge, I would guess the known interfaces or IP ranges on the SRX were already sufficient to steer traffic into the right queues, so configuration on the SRX alone achieved the goal.
To see if you actually need QoS in the first place, look at the tail drops. Here is a neat way of doing it that works on the EX3400 at least:
> show interfaces ge-* extensive | match "Physical|^( +[0-9]+)+ *$" | except " 0$|Down"
Physical interface: ge-0/0/0, Enabled, Physical link is Up
      0     5509437     5509125      312
Physical interface: ge-0/0/1, Enabled, Physical link is Down
Physical interface: ge-0/0/2, Enabled, Physical link is Up
Physical interface: ge-0/0/3, Enabled, Physical link is Up
This will show you, in the last column, any tail drops. The column headers are filtered out but here they are for you:
Queue counters: Queued packets Transmitted packets Dropped packets
All lines with no drops are filtered out, so only lines ending with something other than a lonely "0" will show. In my case, ge-0/0/0 happened to have 312 dropped frames. If those counters are increasing for you, there is a problem. This is where QoS MAY be a solution, but only for some traffic (the traffic you classify as the most important).
One tip is to allocate all the shared memory in the port buffer pool to all interfaces. This means that if a port needs more than its dedicated buffer (which amounts to no more than a few frames), it can borrow memory from the shared pool. Normally the "loan quota" is limited to (IIRC) 40% in the EX2300. Why not let all ports have access to all shared buffers? I have done extensive testing with shared buffers, and in no case, real-world or lab, have I seen anything negative with allowing all ports to use all shared buffer memory. Here is how you do it:
set class-of-service shared-buffer percent 100
show interfaces queue will show you some more detail, but not too much on the EX2300/3400 unfortunately. On a QFX you can see a lot more, like peak queue depth, not to mention the MX :)