
  • 1.  BGP MTU Questions

    Posted 07-02-2015 09:28

    I have two questions:


    1- Path MTU discovery for BGP is disabled by default. When I enable it, it does not seem to detect a smaller MTU in the path. In a three-router topology R1---R2---R3, where R1 and R3 are eBGP peers that are not directly connected, if I set the MTU to 1200 bytes on R2 and clear the BGP session, Junos does not discover this and still sets the MSS on R1 and R3 based on the default MTU of the outbound interface: 1460 (1500 - 40) bytes. Why is that?


    2- The Juniper documentation clearly states that for two directly connected BGP routers with an MTU mismatch, the adjacency will not be established. However, when I test Juniper to Cisco back to back with an MTU mismatch, the BGP adjacency gets established with no issues. Why is the behavior different when one of the two ends is a Cisco router?
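
    The MSS values in question 1 follow from simple header arithmetic. A sketch of that arithmetic (assuming IPv4 with a 20-byte IP header and a 20-byte TCP header with no options):

    ```python
    # TCP MSS derived from an interface MTU: MSS = MTU - IP header - TCP header.
    # Assumes IPv4 (20-byte IP header) and a TCP header with no options (20 bytes).

    IP_HEADER = 20
    TCP_HEADER = 20

    def mss_from_mtu(mtu: int) -> int:
        """Largest TCP segment payload that fits in one packet of the given MTU."""
        return mtu - IP_HEADER - TCP_HEADER

    print(mss_from_mtu(1500))  # default Ethernet MTU -> 1460, the value Junos keeps using
    print(mss_from_mtu(1200))  # R2's reduced MTU -> 1160, what PMTUD would be expected to converge to
    ```

    If path MTU discovery were taking effect across R2, the session MSS would be expected to drop toward the 1160-byte figure rather than staying at 1460.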


    Thanks in advance.


  • 2.  RE: BGP MTU Questions

    Posted 08-06-2015 03:11
      |   view attached

    Hi all


    I have the same issue as atarsha. Please refer to the attached picture.


    The internal BGP session has negotiated a TCP MSS of 4096, although the physical Layer 2 transport has an MTU of 1514.


    @MX3# run show bgp neighbor | match Option
    Options: <Preference LocalAddress Refresh>
    Options: <MtuDiscovery>


    @MX2# run show bgp neighbor | match Option
    Options: <Preference LocalAddress Refresh>
    Options: <MtuDiscovery>


    tcp4       0      0                             ESTABLISHED
        rttmin:       1000  mss:       4096


    tcp4       0      0                             ESTABLISHED
        rttmin:       1000  mss:       4096


    Indeed, the maximum end-to-end ICMP payload size is 1472 bytes:


    @MX3# run ping source size 1472 do-not-fragment count 100 rapid
    PING ( 1472 data bytes
    --- ping statistics ---
    100 packets transmitted, 100 packets received, 0% packet loss
    round-trip min/avg/max/stddev = 0.968/1.058/2.290/0.168 ms



    @MX3# run ping source size 1473 do-not-fragment count 100 rapid
    PING ( 1473 data bytes
    --- ping statistics ---
    50 packets transmitted, 0 packets received, 100% packet loss
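
    The 1472-byte ceiling follows from header arithmetic. A sketch, assuming standard IPv4 (20-byte header), ICMP echo (8-byte header), and Ethernet (14-byte header, which accounts for the 1514-byte Layer 2 MTU above):

    ```python
    # Why 1472 bytes is the largest ping payload that passes with DF set:
    # the 1500-byte IP MTU must carry the 20-byte IPv4 header and the
    # 8-byte ICMP echo header, leaving 1472 bytes of payload.

    IP_HEADER = 20
    ICMP_HEADER = 8
    ETHERNET_HEADER = 14

    l2_mtu = 1514
    ip_mtu = l2_mtu - ETHERNET_HEADER               # 1500-byte IP MTU
    max_icmp_payload = ip_mtu - IP_HEADER - ICMP_HEADER  # 1472 bytes

    print(ip_mtu, max_icmp_payload)
    ```

    So a 1472-byte payload fits in a single 1500-byte IP packet, while 1473 bytes would need fragmentation, which the do-not-fragment flag forbids.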



    Any suggestion?


    Thanks in advance


  • 3.  RE: BGP MTU Questions

    Posted 08-11-2015 10:07

    Hi guys


    Sorry for my late post; I did not understand the point at first.


    So I changed the lab topology to recreate the same scenario as atarsha, and the result was always the same: the two BGP endpoints do not discover the MTU bottleneck.


    Thank you