Junos OS


Ask questions and share experiences about Junos OS.
  • 1.  JNCIP-SP Interdomain multicast configuration

    Posted 12-24-2018 09:09

    Hello,

    Merry Xmas & happy new year.

    I would like to set up an interdomain multicast configuration using MSDP. I built the network diagram below in a vMX box.

    [Diagram: Interdomain multicast]

    Multicast communication is working inside each multicast domain: the multicast sources belonging to a given domain can reach the multicast clients inside that domain. All the hosts in both domains are members of multicast group 224.1.1.20.

    For example, from SRC-A, vHostA, vHostB and vHostC replied to ping 224.1.1.20:

    root@ISP-A:SRC-A> ping 224.1.1.20 ttl 10 count 10 bypass-routing
    PING 224.1.1.20 (224.1.1.20): 56 data bytes
    64 bytes from 10.0.20.1: icmp_seq=0 ttl=62 time=4.160 ms
    64 bytes from 10.0.45.1: icmp_seq=0 ttl=62 time=21.704 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=0 ttl=62 time=24.907 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=1 ttl=62 time=3.526 ms
    64 bytes from 10.0.45.1: icmp_seq=1 ttl=62 time=7.997 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=1 ttl=62 time=12.813 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=2 ttl=62 time=6.211 ms
    64 bytes from 10.0.45.1: icmp_seq=2 ttl=62 time=11.032 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=2 ttl=62 time=16.082 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=3 ttl=62 time=3.379 ms
    64 bytes from 10.0.45.1: icmp_seq=3 ttl=62 time=5.983 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=3 ttl=62 time=8.600 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=4 ttl=62 time=4.514 ms
    64 bytes from 10.0.45.1: icmp_seq=4 ttl=62 time=6.899 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=4 ttl=62 time=9.901 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=5 ttl=62 time=6.186 ms
    64 bytes from 10.0.45.1: icmp_seq=5 ttl=62 time=11.179 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=5 ttl=62 time=16.331 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=6 ttl=62 time=3.307 ms
    64 bytes from 10.0.45.1: icmp_seq=6 ttl=62 time=6.566 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=6 ttl=62 time=11.269 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=7 ttl=62 time=13.833 ms
    64 bytes from 10.0.45.1: icmp_seq=7 ttl=62 time=28.840 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=7 ttl=62 time=48.427 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=8 ttl=62 time=7.736 ms
    64 bytes from 10.0.45.1: icmp_seq=8 ttl=62 time=16.096 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=8 ttl=62 time=22.693 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=9 ttl=62 time=12.113 ms

    --- 224.1.1.20 ping statistics ---
    10 packets transmitted, 10 packets received, +18 duplicates, 0% packet loss
    round-trip min/avg/max/stddev = 3.307/12.582/48.427/9.605 ms


    The MSDP session is well established between vR2 and vR4, as is the eBGP session.
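    For reference, the MSDP peering on vR2 can be sketched like this (the 172.17.20.x addresses are an assumption based on the peer addresses visible in the SA output, not taken from the attached config):

    protocols {
        msdp {
            local-address 172.17.20.1;
            peer 172.17.20.2;
        }
    }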

    My concern is that the SA entry for SRC-A isn't propagated into multicast domain B:

    root@ISP-A:vR2>
    root@ISP-A:vR2> show msdp source-active
    Global active source limit exceeded: 0
    Global active source limit maximum: 25000
    Global active source limit threshold: 24000
    Global active source limit log-warning: 100
    Global active source limit log interval: 0

    Group address    Source address   Peer address     Originator       Flags
    224.1.1.20       10.0.30.1        local            192.168.1.2      Accept

    root@ISP-A:vR2> set cli logical-system vR4
    Logical system: vR4

    root@ISP-A:vR4> show msdp source-active
    Global active source limit exceeded: 0
    Global active source limit maximum: 25000
    Global active source limit threshold: 24000
    Global active source limit log-warning: 100
    Global active source limit log interval: 0

    Group address    Source address   Peer address     Originator       Flags
    224.1.1.20       10.0.30.1        172.17.20.1      192.168.1.2      Reject

    root@ISP-A:vR4>

    Find the whole configuration in the attached file. Thanks for your support.

    Attachment(s)



  • 2.  RE: JNCIP-SP Interdomain multicast configuration

    Posted 12-24-2018 21:49

    Hello,

    PIM is not enabled between vR2 and vR4.
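    Enabling PIM sparse mode on the interconnect can be sketched like this on each side (the lt- unit number below is an assumption; use the unit that faces the other domain):

    protocols {
        pim {
            interface lt-0/0/0.29 {   (assumed unit facing the peer domain)
                mode sparse;
            }
        }
    }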

    Thanks

    Alex



  • 3.  RE: JNCIP-SP Interdomain multicast configuration

    Posted 12-25-2018 00:52

    Thanks Alex for your remark.

    I updated the configuration. PIM is now enabled between vR2 and vR4, as shown below:

    root@ISP-A:vR4> show pim interfaces

    Stat = Status, V = Version, NbrCnt = Neighbor Count,
    S = Sparse, D = Dense, B = Bidirectional,
    DR = Designated Router, P2P = Point-to-point link,
    Active = Bidirectional is active, NotCap = Not Bidirectional Capable

    Name            Stat Mode IP V State         NbrCnt JoinCnt(sg/*g) DR address
    lt-0/0/0.45     Up   S    4  2 NotDR,NotCap  1      0/0            10.0.10.6
    lt-0/0/0.46     Up   S    4  2 DR,NotCap     1      0/0            10.0.10.2
    lt-0/0/0.92     Up   S    4  2 DR,NotCap     1      0/0            172.17.20.2
    pd-0/0/0.55297  Up   S    4  2 P2P,NotCap    0      0/0

    root@ISP-A:vR4> set cli logical-system vR2
    Logical system: vR2

    root@ISP-A:vR2> show pim interfaces

    Stat = Status, V = Version, NbrCnt = Neighbor Count,
    S = Sparse, D = Dense, B = Bidirectional,
    DR = Designated Router, P2P = Point-to-point link,
    Active = Bidirectional is active, NotCap = Not Bidirectional Capable

    Name            Stat Mode IP V State         NbrCnt JoinCnt(sg/*g) DR address
    lt-0/0/0.21     Up   S    4  2 DR,NotCap     1      2/0            10.0.5.2
    lt-0/0/0.23     Up   S    4  2 NotDR,NotCap  1      0/0            10.0.5.10
    lt-0/0/0.29     Up   S    4  2 NotDR,NotCap  1      0/0            172.17.20.2
    pd-0/0/0.51202  Up   S    4  2 P2P,NotCap    0      0/0

    But unfortunately the SA entry for SRC-A still shows the Reject flag on vR4:

    root@ISP-A:vR4> show msdp source-active
    Global active source limit exceeded: 0
    Global active source limit maximum: 25000
    Global active source limit threshold: 24000
    Global active source limit log-warning: 100
    Global active source limit log interval: 0

    Group address    Source address   Peer address     Originator       Flags
    224.1.1.20       10.0.30.1        172.17.20.1      192.168.1.2      Reject

     



  • 4.  RE: JNCIP-SP Interdomain multicast configuration
    Best Answer

    Posted 12-25-2018 01:27

    Hello,

    You also need to announce your mcast source IP 10.0.30.1 from vR2 to vR4 via eBGP.
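    In other words, something along these lines on vR2 (the group and policy names here are placeholders, not taken from the attached config):

    protocols {
        bgp {
            group TO-DOMAIN-B {
                export ANNOUNCE-MCAST-SRC;
            }
        }
    }
    policy-options {
        policy-statement ANNOUNCE-MCAST-SRC {
            from {
                route-filter 10.0.30.0/30 exact;
            }
            then accept;
        }
    }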

    HTH

    Thx

    Alex



  • 5.  RE: JNCIP-SP Interdomain multicast configuration

    Posted 12-25-2018 04:08

    Hello Alex,

    Fantastic! I updated the configuration as you suggested and announced the mcast source subnets via eBGP.

    On vR2:

    root@ISP-A:vR2> show configuration policy-options
    policy-statement SENT_ISIS {
        term sent_isis {
            from {
                protocol isis;
                route-filter 10.0.30.0/30 exact;   (SRC-A subnet)
                route-filter 10.0.35.0/30 exact;   (SRC-B subnet)
            }
            then accept;
        }
    }

    On vR4:

    root@ISP-A:vR4> show configuration policy-options
    policy-statement SENT_ISIS {
        term sent_isis {
            from {
                protocol isis;
                route-filter 10.10.30.0/30 exact;   (SRC-C subnet)
            }
            then accept;
        }
    }
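    To double-check that the source subnet is really advertised to the eBGP peer, the advertised routes can be inspected with something like the following (the peer address 172.17.20.2 is an assumption based on the earlier outputs):

    root@ISP-A:vR2> show route advertising-protocol bgp 172.17.20.2 10.0.30.0/30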

     

    The ping is working now:

    root@ISP-A:vR4> set cli logical-system SRC-A
    Logical system: SRC-A

    root@ISP-A:SRC-A> ping 224.1.1.20 ttl 10 count 10 bypass-routing
    PING 224.1.1.20 (224.1.1.20): 56 data bytes
    64 bytes from 10.0.20.1: icmp_seq=0 ttl=62 time=2.918 ms
    64 bytes from 10.0.45.1: icmp_seq=0 ttl=62 time=30.746 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=0 ttl=62 time=33.173 ms (DUP!)
    64 bytes from 10.10.20.1: icmp_seq=0 ttl=60 time=38.139 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=1 ttl=62 time=2.970 ms
    64 bytes from 10.0.45.1: icmp_seq=1 ttl=62 time=7.030 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=1 ttl=62 time=11.610 ms (DUP!)
    64 bytes from 10.10.20.1: icmp_seq=1 ttl=60 time=16.795 ms (DUP!)
    64 bytes from 10.0.45.1: icmp_seq=2 ttl=62 time=19.463 ms
    64 bytes from 10.0.20.1: icmp_seq=2 ttl=62 time=30.803 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=2 ttl=62 time=36.215 ms (DUP!)
    64 bytes from 10.10.20.1: icmp_seq=2 ttl=60 time=41.568 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=3 ttl=62 time=18.309 ms
    64 bytes from 10.0.45.1: icmp_seq=3 ttl=62 time=22.075 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=3 ttl=62 time=25.920 ms (DUP!)
    64 bytes from 10.10.20.1: icmp_seq=3 ttl=60 time=32.308 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=4 ttl=62 time=5.062 ms
    64 bytes from 10.0.45.1: icmp_seq=4 ttl=62 time=7.431 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=4 ttl=62 time=10.227 ms (DUP!)
    64 bytes from 10.10.20.1: icmp_seq=4 ttl=60 time=13.240 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=5 ttl=62 time=4.199 ms
    64 bytes from 10.0.45.1: icmp_seq=5 ttl=62 time=7.806 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=5 ttl=62 time=11.538 ms (DUP!)
    64 bytes from 10.10.20.1: icmp_seq=5 ttl=60 time=16.315 ms (DUP!)
    64 bytes from 10.0.45.1: icmp_seq=6 ttl=62 time=14.895 ms
    64 bytes from 10.0.20.1: icmp_seq=6 ttl=62 time=23.106 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=6 ttl=62 time=31.847 ms (DUP!)
    64 bytes from 10.10.20.1: icmp_seq=6 ttl=60 time=40.991 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=7 ttl=62 time=13.915 ms
    64 bytes from 10.0.45.1: icmp_seq=7 ttl=62 time=28.423 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=7 ttl=62 time=34.914 ms (DUP!)
    64 bytes from 10.10.20.1: icmp_seq=7 ttl=60 time=53.435 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=8 ttl=62 time=3.257 ms
    64 bytes from 10.0.45.1: icmp_seq=8 ttl=62 time=6.940 ms (DUP!)
    64 bytes from 10.0.25.1: icmp_seq=8 ttl=62 time=10.511 ms (DUP!)
    64 bytes from 10.10.20.1: icmp_seq=8 ttl=60 time=16.102 ms (DUP!)
    64 bytes from 10.0.20.1: icmp_seq=9 ttl=62 time=13.904 ms

    --- 224.1.1.20 ping statistics ---
    10 packets transmitted, 10 packets received, +27 duplicates, 0% packet loss
    round-trip min/avg/max/stddev = 2.918/19.949/53.435/12.894 ms

    vHostD (10.10.20.1), which belongs to mcast domain B, is now replying.

    Thanks so much, Alex!