The basic setup is an auditorium with Crestron "encoders" for various video sources that output to local multicast IPs, and a "decoder" that plays the selected stream to the projector. So far it's just an L2 implementation on a single VLAN, with "set protocols igmp-snooping vlan all" on the switches involved, and one switch configured as an L2 querier. (All traffic stays within a single "Wired-AV" VLAN, so it's not clear whether we need PIM or an L3 IGMP config, but maybe so?)
When the querier is the local switch in the AV rack, all is fine. When the querier is any upstream switch, all the local video feeds are fed up to the querier, even though there are no other hosts that want these groups. I had naively assumed that only group information would be sent to the querier, not the actual data streams, unless someone else wanted them. But as far as I can tell, the querier receives all the streams and then just drops them, which is basically wasted bandwidth.
We have a QFX5100 running 20.2R3.9 as an L2/L3 aggregation switch, feeding a bunch of EX3400s on individual floors, with the 3400s running 21.4R3-S3.4. Uplinks from the 3400s to the QFX are dual-10G ae trunk links; the QFX holds the irb gateway IPs for each VLAN and also does DHCP relay. So it made sense to me to also make it the querier for the VLANs where needed. However, when the QFX is querier, the various encoder streams are sent to it. It is easy to tell just from the bps counts on the uplink, but I also did a brief packet capture and confirmed that most of the flow was from the multicast encoder sources to their group IPs. The QFX does not show any other clients for these streams, and in one test case it has no other interfaces carrying the VLAN the encoders are on. When I take the querier off the QFX, add an irb with an IP in the subnet to the access switch with the encoders, and make that switch the querier, the multicast traffic on the uplink stops, and everything still works locally.
Is there some requirement that all streams go to the querier? Would PIM and/or an L3 IGMP config on the QFX help?

For now, our IGMP config is dead simple, L2 only. On the QFX we have the VLAN definitions and snooping for all VLANs. On the switch in the auditorium, we have IGMP snooping and the L2 querier:

QFX:
set protocols igmp-snooping vlan all immediate-leave
set vlans Wired-AV vlan-id 162
set vlans Wired-AV l3-interface irb.162
set interfaces irb unit 162 description Wired-AV
set interfaces irb unit 162 family inet address 10.26.32.1/24
set protocols ospf area 10.16.128.0 interface irb.162 passive
set forwarding-options dhcp-relay group internal-standard interface irb.162
set interfaces ae11 unit 0 family ethernet-switching vlan members Wired-AV ## this is the interface to the 3400, and is the only spot this VLAN is used

3400 (auditorium switch):
set protocols igmp-snooping vlan all
set protocols igmp-snooping vlan Wired-AV l2-querier source-address 10.26.32.3
set protocols igmp-snooping vlan Wired-AV immediate-leave
set vlans Wired-AV vlan-id 162
set interfaces irb unit 162 family inet address 10.26.32.3/24
The above config works, with no extra traffic on the uplink. If I take out the l2-querier from the 3400, and add in
set protocols igmp-snooping vlan Wired-AV l2-querier source-address 10.26.32.1
to the QFX (and then "restart multicast-snooping" on both of them), then all the sources are sent to the QFX, even though it doesn't need them. What am I missing?

(So for now, a possible workaround is to put the querier for each subnet on the switch where the devices are, but this is kind of ugly, as it means a bunch of extra unique subnets and a bunch of extra irbs out on the L2 access switches.)
Thanks for any clues, or knowledge like "this is how it is supposed to work" or "you need an L3 IGMP config also".
Here's a diagram showing the arrangement. I don't get why the querier needs all the data flows, rather than just knowing that these multicast groups are available if anyone wants them.
All of the multicast traffic is local within the av-rack switch. Each source is 300 to 600 Mbps, and takes a video input and makes it into a multicast stream with the specified IP. The projector input unit only joins one group at a time.
When the av-rack switch is querier, none of the multicast traffic goes out its uplink, as there are no hosts elsewhere that want these groups.
When switch 216-b is querier, the multicast flows from all three sources are sent on the uplink from the av-rack switch, and then dropped at the 216-b switch. Again, no hosts anywhere except av-rack switch want this traffic.
When the QFX is querier, all three sources are sent from the av-rack switch to 216-b, and then sent on the uplink to the QFX. The QFX drops all this traffic, as no one wants it.
Does all potential multicast source traffic need to go to the querier? Seems like a big waste of bandwidth!
Yep, «this is how it is supposed to work».
Long explanation: when a switch (or the Layer 2 part of an L3 switch) sees an IGMP querier, an IGMP router, or a PIM router, what it actually sees is an «mrouter». The port connected to it is therefore flagged as an «mrouter port» (on a downstream switch, the uplink toward another switch that leads to the mrouter becomes an mrouter port too, btw). By definition, an mrouter port always receives all the multicast streams inside an L2 LAN; this is mandatory.
And meanwhile, for IGMP snooping to work at all, you absolutely need at least an IGMP querier (or better, a PIM router that also acts as the IGMP router).
So, it works as intended.
The rationale is that:
1) an mrouter must be able to see all the multicast flows in a LAN containing multicast sources, so that it can notify the other multicast routers in the network of their existence; and:
2) there's no provision to identify which kind of mrouter sits behind an «mrouter port» (an IGMP-querier-only device obviously won't advertise any of the multicast sources it sees). So it works this way regardless; in your case that's a bit annoying, but «this is by design in the RFCs».
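For what it's worth, you can usually confirm which interface a switch has learned as its mrouter port from the snooping operational tables. A hedged sketch (exact command names vary between ELS and older non-ELS Junos releases, so check your version's CLI):

```
## On the av-rack switch, list learned groups and router/receiver ports
## for the AV VLAN (ELS syntax; non-ELS releases use "show igmp-snooping ..."):
show igmp snooping membership vlan Wired-AV
```

When the querier moves upstream, the uplink should appear in this output as a multicast-router interface, which is exactly why the streams start flowing that way.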
Dang. Thanks for the clear explanation.
So unless I route layer 3 all the way out to every access switch, _all_ of my local multicast streams are required to be sent back to the core? Oof. Assuming about 1 Gbps of streams from every space that has these Crestron rigs, that's potentially a bunch of extra packets for the QFX to deal with.
I was sorta hoping the access switches could say, "I have these groups active out here, let me know if you want any of them (by joining the group) and then I will send it to you".
Instead, the design is, "here, take all this data, and then you can decide if anyone anywhere wants to join any of these groups." I am sure there is a good reason for that, but it seems wasteful. I thought the point was that multicast streams only got sent where they were wanted.
------------------------------
Olivier Benghozi

Original Message:
Sent: 11-26-2023 12:43
From: skb
Subject: All streams get sent to IGMP querier, even with no listeners. (Is there a better way?)
You got the point: in order to have multicast streams sent only where they are wanted, you use L3 and route (with PIM-SM doing the L3 routing, and IGMP inside each VLAN).
Another argument against big LANs, actually.
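For reference, a minimal sketch of the PIM-SM side on the QFX, assuming the QFX itself acts as rendezvous point; the RP choice and the second irb are placeholders, and PIM on EX/QFX may need a license:

```
## Hypothetical sketch only: L3 multicast routing on the QFX.
set protocols pim rp local family inet address 10.26.32.1   ## QFX as its own RP (placeholder choice)
set protocols pim interface irb.162 mode sparse             ## AV source VLAN
set protocols pim interface irb.163 mode sparse             ## example receiver VLAN (placeholder)
```

With PIM-SM in place, streams cross an L3 boundary only when some receiver actually joins the group.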
Thanks again! But to confirm: even if each QFX were part of a PIM-SM network, would any 3400s connected to that QFX still send every multicast source on the 3400 up to its QFX? Or would PIM at the aggregation layer somehow be able to prevent that?
Currently our three aggregation QFXes route to each other and to our core routers to outside, but then have L2 connections to the 3400 switches on the floors they serve. Not sure I want all my 3400s to be L3 connections back to their core QFX, but I suppose they could be. Right now the 3400s don't even have IPs on anything except our switch management VLAN, but I guess we could have them route their AV VLANs. Lots of annoying config like DHCP relay that right now lives on the cores rather than out on the access switches, but would need to change with routed links. (But, I'm probably behind the times, and everything should be routed.)
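If the 3400s did become the L3 edge for these VLANs, the irb and the DHCP relay config would indeed have to move out with them. A rough sketch of what that might look like on an access 3400, with the server-group name and the helper address 10.26.0.10 purely as placeholders:

```
## Hypothetical: gateway irb plus DHCP relay on the access 3400
## instead of the aggregation QFX. Addresses are illustrative only.
set interfaces irb unit 162 family inet address 10.26.32.1/24
set forwarding-options dhcp-relay server-group dhcp-servers 10.26.0.10
set forwarding-options dhcp-relay active-server-group dhcp-servers
set forwarding-options dhcp-relay group internal-standard interface irb.162
```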
Yes. If the EX3400 (or any switch doing IGMP snooping) sees an mrouter on one interface inside a VLAN (that is, something sending IGMP membership queries, PIM packets, or whatever), it will send it all the multicast flows present in the VLAN, as this is by design (of the RFCs, not of Juniper).
By the way, IGMP snooping + querier is one thing... but true multicast routing (PIM and so on) theoretically needs a licence on EX/QFX :P
Anyway, if your sources are in a dedicated sources VLAN, on a device that has an L3 interface (irb?) inside that VLAN doing PIM-SM/IGMP, and the VLAN containing the sources isn't propagated at layer 2, then you avoid this multicast flood, of course.
Actually, since the QFXes will receive the multicast flows anyway, it's not a big deal to have them configured as PIM routers inside the sources VLAN (and let them take care of the IGMP stuff).
And of course this supposes that the multicast flows are transmitted toward the receivers over L3 point-to-point interfaces between devices (such interfaces can simply be dedicated point-to-point VLANs + irb, or dot1q-tagged subinterfaces). No undesired flooding anymore.
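One such L3 point-to-point hop between the QFX and a 3400 could be sketched as a dedicated transit VLAN plus irb, with PIM on it; the VLAN name, ID, and /31 addressing are all placeholders:

```
## Hypothetical QFX side of a routed transit link; the 3400 would
## mirror this with 10.26.255.1/31 on its own irb.
set vlans transit-av-3400 vlan-id 3999
set vlans transit-av-3400 l3-interface irb.3999
set interfaces irb unit 3999 family inet address 10.26.255.0/31
set interfaces ae11 unit 0 family ethernet-switching vlan members transit-av-3400
set protocols pim interface irb.3999 mode sparse
```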
Note, not relevant in your case: if you had a VLAN containing three PIM routers and nothing else (that is, a transit VLAN for multicast flows whose sources are somewhere else), and two of them exchanged some multicast flows, the third one would receive all the flows too (because mrouter, by design, and so on). For such cases (transit VLANs, not source VLANs!) a «PIM snooping» protocol exists on Cisco gear, but it's not relevant at all for you (you have sources in your VLAN and no intention of mixing a bunch of PIM routers in a transit-only VLAN).