

Pim dense mode on SRX and (*,G) State creation

  • 1.  Pim dense mode on SRX and (*,G) State creation

    Posted 10-08-2017 14:08

    Hi everyone,

     

    Source 199.199.199.1-----  tun10 -Cisco-fe0/0/4---- Receiver ( 239.1.1.1)

     

    On Cisco :

     

    Source is not sending any multicast stream yet

    As soon as the LHR receives an IGMP report for 239.1.1.1, a (*,239.1.1.1) entry is created, which can be seen with show ip multicast route; i.e., we do not need to receive a multicast stream from the source to create the (*,239.1.1.1) state. It is created as a result of IGMP.

     

    Now if we use the same setup on SRX:

    Again source is not sending any multicast stream yet

    Source 199.199.199.1-----  tun10 -SRX-fe0/0/4---- Receiver ( 239.1.1.1)

     

    I noticed following:

     

    1) The (*,239.1.1.1) state is not created in the multicast table, even though the SRX is able to discover via IGMP that the receiver is interested in 239.1.1.1. That membership can be seen using show igmp, but the multicast table does not show an entry.

     

    Now if we contrast this with Cisco: Cisco creates a (*,239.1.1.1) state in its multicast routing table once it receives an IGMP report for 239.1.1.1 from the receiver.

     

    Is that normal behavior on SRX?

     

     

    2) Source 199.199.199.10 is sending a stream to 239.1.1.1

     

    The SRX creates an (S,G) state, i.e. (199.199.199.10, 239.1.1.1), which can be seen using show multicast route.

     

    This behavior is common between Cisco and SRX.
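    For reference, a minimal set of operational commands to check both states on the SRX might look like this (a sketch; the group and behavior described are from the setup above):

    ```
    show igmp group                   # IGMP membership learned from the receiver
    show multicast route extensive    # forwarding cache; (S,G) appears once traffic flows
    show pim join extensive           # PIM join/prune state per group
    ```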

     

     

    3) Do we have any command on the SRX that can show the number of packets dropped because of RPF failure?

     

     

    Thanks and have a nice weekend!!



  • 2.  RE: Pim dense mode on SRX and (*,G) State creation
    Best Answer

     
    Posted 10-09-2017 09:44

    1. It's not normal. The SRX is supposed to show (*,G) state when it receives an IGMP join. You can try static IGMP group mapping for test purposes.
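    A minimal configuration sketch for the static IGMP mapping suggested above (assuming the receiver-facing interface is fe-0/0/4.0, as in the original topology):

    ```
    set protocols igmp interface fe-0/0/4.0 static group 239.1.1.1
    commit
    show igmp group    # the group should now appear as a static membership
    ```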

     

    2. That's expected behavior.

     

    3. AFAIK, there is no such command.



  • 3.  RE: Pim dense mode on SRX and (*,G) State creation

    Posted 10-09-2017 10:09
    In my setup, as I stated in a different post, I had a multicast address of 239.255.255.250*. This is ideal because it has loopback behavior. The star is at the end and is not separated by any delimiters. I use Comcast. My 224s are present too. There are as many as four 224 addresses, and more, to control the influx of 239s. Some are dynamic, which probably means they are being flooded and more are created.


  • 4.  RE: Pim dense mode on SRX and (*,G) State creation

    Posted 10-09-2017 10:11
    I have a DHCP environment, though, at my WAN interface, i.e. ge-0/0/0.


  • 5.  RE: Pim dense mode on SRX and (*,G) State creation

    Posted 10-10-2017 02:51

    Hello,

     


    @sarahr202 wrote:

     

    <skip> 

    Source is not sending any multicast stream yet

     

    <skip> 

    1) The (*,239.1.1.1) state is not created in the multicast table, even though the SRX is able to discover via IGMP that the receiver is interested in 239.1.1.1. That membership can be seen using show igmp, but the multicast table does not show an entry.

     

    Now if we contrast this with Cisco: Cisco creates a (*,239.1.1.1) state in its multicast routing table once it receives an IGMP report for 239.1.1.1 from the receiver.

     

    Is that normal behavior on SRX?

    This is correct SRX behavior: "show multicast route" actually displays multicast forwarding cache entries, and without the source sending traffic, there is no forwarding state.

    Also see the book "JUNOS Enterprise Routing", 2nd edition, page 554.
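    In other words, IGMP membership state and forwarding cache state are viewed with different commands; a quick way to see each (standard Junos operational commands, matching the behavior described above):

    ```
    show igmp group         # membership created by the IGMP report (exists before any traffic)
    show multicast route    # forwarding cache; stays empty until the source actually sends
    ```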

     


    @sarahr202 wrote:

     

     

    3)  Do we have any command on SRX that can show me the number pf packet dropped because of RPF failure?

    Yes, please see the "show multicast statistics" command in the JUNOS CLI command reference:

    https://www.juniper.net/documentation/en_US/junos/topics/reference/command-summary/show-multicast-statistics.html

     

    Mismatch: Number of multicast packets that did not arrive on the correct upstream interface.
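    Besides the Mismatch counter, the RPF interface the SRX expects for a given source can also be checked directly (a sketch, using the source address from this thread):

    ```
    show multicast statistics            # per-interface counters, including Mismatch
    show multicast rpf 199.199.199.10    # RPF next hop / interface toward the source
    ```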

    HTH

    Thx
    Alex