Routing


Ask questions and share experiences about ACX Series, CTP Series, MX Series, PTX Series, SSR Series, JRR Series, and all things routing, including portfolios and protocols.

Issue with simple multicast ping

  • 1.  Issue with simple multicast ping

    Posted 10-22-2019 21:08

    morning folks,

     

    I have had multicast working in the past, but on revisiting it I am not able to get even a response to a simple ping.

    I am running multiple vMXs on 15.1 - I imagine I am just forgetting something here; hopefully a nudge in the right direction is all I need.

     

    Source:

        run ping 239.0.0.1 source 3.3.3.3 bypass-routing interface ge-0/0/3 ttl 10 count 5

     

    Receiver:
        set protocols sap listen group 239.0.0.1
        set protocols igmp interface [int | all] static group 239.0.0.1

     

    Where I run PIM I do see the join propagating upstream, but still no response from the receiver. The routing table is fully converged. One point that comes to mind is 'chassis fpc 0 pic 0 tunnel-services bandwidth 1g' - I have this in my notes, but it is unclear whether this goes on everyone participating in multicast, just the source, or everyone but the receiver?
    I notice that it also affects auto-rp negatively, so should it just be configured on customer-facing interfaces? If so, how would I handle a vMX vFP with only one PIC?

    thanks all


  • 2.  RE: Issue with simple multicast ping

    Posted 10-22-2019 23:02

    Hello,

     


    @byron.moore wrote:

     

    Receiver:
        set protocols sap listen group 239.0.0.1
        set protocols igmp interface [int | all] static group 239.0.0.1

     

    Where I run PIM I do see the join propagating upstream, but still no response from the receiver.

     

     

    The last time I tested it, about 2 years ago, "protocols sap listen" was broken in the GRT but worked in a Logical System.

    JUNOS was 17.2R<something>.

    I haven't had a chance to track down in exactly which JUNOS release it stopped working, so please try putting "protocols sap listen" into an LS, re-test, and report back.
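
    Something along these lines, as a rough and untested sketch - the LS name "LS1", the interface and the address are placeholders rather than anything from your setup, so adjust to taste:

        set logical-systems LS1 interfaces ge-0/0/3 unit 0 family inet address 10.0.0.2/24
        set logical-systems LS1 protocols sap listen 239.0.0.1
        set logical-systems LS1 protocols igmp interface ge-0/0/3.0 static group 239.0.0.1

    Then source the multicast ping towards that LS interface and see whether a reply comes back.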

    Thanks

    HTH

    Alex

     



  • 3.  RE: Issue with simple multicast ping

    Posted 10-23-2019 08:03

    thanks for your reply,

     

    I tried running the above in a logical system and could not get an improvement.

    No pings were returned from an adjacent neighbor or from within the logical system itself.

    The version I was using in the past may have changed, so I'll try a newer version when I get home.

    Thanks for the suggestion.

     

    In the meantime, do you think the presence or absence of tunnel-services bandwidth could have an effect?



  • 4.  RE: Issue with simple multicast ping

     
    Posted 10-23-2019 08:30

    Hi,

    For - "In the meantime, do you think the presence or absence of tunnel-services bandwidth could have an effect?"

    You need tunnel-services on the FHR, LHR and RP. It does not need to be on a specific FPC/PIC; any available FPC/PIC can be used.
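
    On a vMX with a single FPC/PIC that would just be the knob you already have in your notes, e.g. (a sketch only, the 1g bandwidth value is an example):

        set chassis fpc 0 pic 0 tunnel-services bandwidth 1g

    After a commit you should see the tunnel interfaces for that PIC (vt-, pe-, pd-, gr-, etc.) in "show interfaces terse".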

     

    Thanks,

    Mayank Shah



  • 5.  RE: Issue with simple multicast ping

    Posted 10-23-2019 08:36

    Hello,

     


    @mkshah wrote:

     

    You need tunnel-services on FHR, LHR and RP. It does not need to be on specific FPC/PIC.


     

    The LHR has nothing to do with PIM REGISTER, which is what requires tunnel services.

    And if Your RP is on the FHR, then tunnel services are not required at all.
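
    If in doubt, one way to sanity-check whether a given box has the register encap/decap resources is to look for the PIM pe-/pd- interfaces that tunnel-services makes available, e.g. something like:

        show interfaces terse | match "pe-|pd-"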

     

    HTH

    Thx

    Alex



  • 6.  RE: Issue with simple multicast ping

    Posted 10-23-2019 11:47

    Thanks for the help

     

    I have deployed 18.2 vMXs and am still not getting either adjacent-neighbor pings through or responses from within the chassis itself. A very simple configuration is in use; the receiver config is pasted below, and the source is similar, just without IGMP or SAP. The ping command is "run ping 239.0.0.1 ttl 10 bypass-routing interface ge-0/0/0.0 source 1.1.1.1", with a similar ping run on the receiver. Chassis network-services is enhanced-ip, with one control plane and one forwarding plane. Unicast-scope pings are working, OSPF is converged, and the source's IGMP is aware of the static groups on the receiver. This is a newly installed vMX trial license.

     

    set chassis fpc 0 lite-mode
    set chassis network-services enhanced-ip
    set interfaces ge-0/0/0 unit 0 family inet address 10.1.2.2/24
    set interfaces lo0 unit 0 family inet address 2.2.2.2/32
    set protocols igmp interface ge-0/0/0.0 static group 239.0.0.1
    set protocols sap listen 239.0.0.1
    set protocols sap listen 225.0.0.1
    set protocols sap listen 230.0.0.1
    set protocols sap listen 235.0.0.1
    set protocols ospf area 0.0.0.0 interface ge-0/0/1.0
    set protocols ospf area 0.0.0.0 interface ge-0/0/0.0
    set protocols ospf area 0.0.0.0 interface lo0.0
    

    Any more ideas?

    Thanks for all the help



  • 7.  RE: Issue with simple multicast ping
    Best Answer

    Posted 10-24-2019 22:52

    Hello,

     

    I labbed up Your setup, and in order to have multicast pings replied to, You need to enable PIM on the receiver's ingress interface.

    PIM-DM should suffice.

    IGMP is not enough.
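
    For example, on the receiver something like this (assuming the same ge-0/0/0.0 ingress interface from Your paste):

        set protocols pim interface ge-0/0/0.0 mode dense

    Sparse mode with a reachable RP should also work, but dense mode is the quickest thing to test with.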

    HTH

    Thx

    Alex



  • 8.  RE: Issue with simple multicast ping

    Posted 10-30-2019 10:29

    Thanks for the reply

    and huge thanks for labbing it up and taking a look

    I had given up on replicating pings, but I will give that a try as soon as I can.
    IE test next week - I feel like multicast is still my weakest subject, having been unable to verify my configurations.

    thanks again



  • 9.  RE: Issue with simple multicast ping

    Posted 12-11-2020 06:09
    I have exactly the same issue as described by byron.moore. PIM is enabled on the ingress interface of the receiver (a vMX running Junos 18.2R1.9), and the router control plane receives the ICMP "echo request" messages, but it does not send an ICMP "echo reply" in response:
    root@r8> monitor traffic interface ge-0/0/0 no-resolve matching icmp
    verbose output suppressed, use <detail> or <extensive> for full protocol decode
    Address resolution is OFF.
    Listening on ge-0/0/0, capture size 96 bytes
    
    10:44:35.345907  In IP 10.10.111.1 > 225.1.1.1: ICMP echo request, id 51945, seq 33, length 64
    10:44:36.351178  In IP 10.10.111.1 > 225.1.1.1: ICMP echo request, id 51945, seq 34, length 64
    10:44:37.406408  In IP 10.10.111.1 > 225.1.1.1: ICMP echo request, id 51945, seq 35, length 64
    10:44:38.355660  In IP 10.10.111.1 > 225.1.1.1: ICMP echo request, id 51945, seq 36, length 64
    ^C
    8 packets received by filter
    0 packets dropped by kernel
    
    root@r8>


    I also made sure that the FreeBSD net.inet.icmp.bmcastecho sysctl is set to 1:

    root@r8> start shell sh
    # sysctl net.inet.icmp.bmcastecho
    net.inet.icmp.bmcastecho: 1
    # exit
    
    root@r8>
    

    Still, for some reason, the vMX did not reply to ICMP "echo request" messages addressed to a multicast address.

    As a workaround, I joined the multicast group on a host machine:
    martin@lab-svr:~$ ip a sh dev ge-0.0.1-r8
    634: ge-0.0.1-r8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN
    group default qlen 1000
        link/ether fe:78:dd:0e:ff:f1 brd ff:ff:ff:ff:ff:ff
        inet 10.10.13.2/24 scope global ge-0.0.1-r8
           valid_lft forever preferred_lft forever
        inet6 fe80::fc78:ddff:fe0e:fff1/64 scope link
           valid_lft forever preferred_lft forever
    martin@lab-svr:~$
    martin@lab-svr:~$ sudo socat STDIO UDP4-DATAGRAM:225.1.1.1:5000,ip-add-membership=225.1.1.1:10.10.13.2

    One can confirm the membership status with "ip maddr":

    martin@lab-svr:~$ ip -4 maddr sh dev ge-0.0.1-r8
    634:    ge-0.0.1-r8
            inet  225.1.1.1
            inet  224.0.0.251
            inet  224.0.0.1
    martin@lab-svr:~$


    Also, the value of /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts in Linux has to be 0, and iptables/netfilter has to be configured to allow the traffic.
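
    For example, something like this on the host (generic commands; adjust to your distribution and existing firewall rules):

    martin@lab-svr:~$ # let the kernel answer broadcast/multicast echo requests
    martin@lab-svr:~$ sudo sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0
    martin@lab-svr:~$ # and make sure inbound ICMP echo requests are not filtered
    martin@lab-svr:~$ sudo iptables -I INPUT -p icmp --icmp-type echo-request -j ACCEPT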




  • 10.  RE: Issue with simple multicast ping

    Posted 12-11-2020 06:53
    Some general guidelines:

    1. Ideally I would enable tunnel-services on all routers (even though it is not required), but this is a lab anyway.
    2. Make sure the router is able to reply to multicast pings ('set system no-multicast-echo' must not be present).
    3. Check whether the shared/source trees are built or not (see the show commands below).
    4. Multicast pings are sent with TTL=1 by default; try increasing the TTL according to the topology size when testing.
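
    For point 3, a quick way to check is with the standard show commands (the output of course depends on the topology):

        show pim join extensive
        show pim rps
        show multicast route extensive
        show igmp group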

    Regards,

    Elvin


  • 11.  RE: Issue with simple multicast ping

    Posted 12-11-2020 07:24

    ElvinArias, thanks! It turned out that in my case the TTL was too low. I generated multicast traffic with "ping ttl 5 225.1.1.1 interface ge-0/0/0.0 bypass-routing", and since I saw the traffic on the receiver, I assumed that the TTL was not the issue. Increasing the TTL to at least 6 did the trick for my multicast topology:

    root@r1> ping ttl 5 225.1.1.1 interface ge-0/0/0.0 bypass-routing
    PING 225.1.1.1 (225.1.1.1): 56 data bytes
    ^C
    --- 225.1.1.1 ping statistics ---
    4 packets transmitted, 0 packets received, 100% packet loss
    
    root@r1> ping ttl 6 225.1.1.1 interface ge-0/0/0.0 bypass-routing
    PING 225.1.1.1 (225.1.1.1): 56 data bytes
    64 bytes from 10.10.11.3: icmp_seq=0 ttl=61 time=7.407 ms
    64 bytes from 10.10.11.3: icmp_seq=1 ttl=61 time=83.079 ms
    ^C
    --- 225.1.1.1 ping statistics ---
    2 packets transmitted, 2 packets received, 0% packet loss
    round-trip min/avg/max/stddev = 7.407/45.243/83.079/37.836 ms
    
    root@r1>
    

    By the way, "set system no-multicast-echo" toggles this very same FreeBSD net.inet.icmp.bmcastecho kernel variable.
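
    If anyone wants to confirm that mapping themselves, the rough sequence (commands only, no output pasted here) would be to set the knob and re-check the sysctl from the shell, which should then read 0 if it really is the same variable:

    root@r8> configure
    root@r8# set system no-multicast-echo
    root@r8# commit and-quit
    root@r8> start shell sh
    # sysctl net.inet.icmp.bmcastecho
    # exit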


  • 12.  RE: Issue with simple multicast ping

    Posted 12-11-2020 11:36
    Glad that helped!

    Elvin