I have built a simple lab to practice IP VPN (L3VPN).
22.214.171.124/24 -LAN--MX1-----------MX2(P)---------ge-0/0/0--MX4
126.96.36.199/24 -LAN--MX3--------------|
1) We have three PEs (MX1, MX3, MX4), each configured with routing instance "IPVPN". MX2 is the P router. All routers are running LDP.
2) MX1 and MX3 (PEs) are announcing 188.8.131.52/24 to their IBGP neighbor MX4.
3) MX4 receives two BGP updates for 184.108.40.206/24 from its IBGP neighbors MX1 and MX3.
Using BGP multipath on MX4, we want MX4 to load-balance traffic destined to 220.127.116.11/24 over two LSPs, but the forwarding table shows only one LSP path for 18.104.22.168/24, even though MX4 is configured with a load-balancing policy.
However, if I configure "set routing-instances IPVPN routing-options multipath", then all expected LSPs are installed in the forwarding table, as shown at the end of this post.
This behavior differs on MX compared to EX switches, in that MX requires "set routing-instances IPVPN routing-options multipath".
By default, ECMP (Equal-Cost Multipath) is used to load-balance traffic on EX/QFX switches when there are multiple equal-cost paths available to the same destination. EX/QFX switches support per-packet (per-flow) load balancing in the global routing instance (inet.0) as well as in user-defined routing instances (virtual-router).
However, when a per-packet load-balancing policy is applied to the global routing instance (inet.0), it takes effect for all routing instances, both global and user-defined.
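On EX/QFX, that global per-packet policy is typically applied like this (a minimal sketch; the policy name PFE-LB is illustrative and not taken from this lab):

```
set policy-options policy-statement PFE-LB then load-balance per-packet
set routing-options forwarding-table export PFE-LB
```

Because the forwarding-table export policy lives at the global level, it also covers routes in user-defined routing instances.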
Thanks and have a good day!!!
Additional info: MX4 config snippets
##Load balancing config##
set policy-options policy-statement LB then load-balance per-packet
set routing-options forwarding-table export LB
set protocols bgp group INTERNAL type internal
set protocols bgp group INTERNAL family inet-vpn any
set protocols bgp group INTERNAL multipath
set protocols bgp group INTERNAL neighbor 22.214.171.124 local-address 126.96.36.199
set protocols bgp group INTERNAL neighbor 188.8.131.52 local-address 184.108.40.206
## BGP ROUTES##
root@MX4> show route table IPVPN.inet 220.127.116.11/24 detail
IPVPN.inet.0: 4 destinations, 5 routes (4 active, 0 holddown, 0 hidden)
18.104.22.168/24 (2 entries, 1 announced)
        *BGP    Preference: 170/-101
                Route Distinguisher: 22.214.171.124:11
                Next hop type: Indirect, Next hop index: 0
                Address: 0xc632d70
                Next-hop reference count: 9
                Source: 126.96.36.199
                Next hop type: Router, Next hop index: 615
                Next hop: 188.8.131.52 via ge-0/0/0.0, selected  <-----------
                Label operation: Push 20, Push 16(top)
                Label TTL action: prop-ttl, prop-ttl(top)
                Load balance label: Label 20: None; Label 16: None;
                Label element ptr: 0xc632ac0
                Label parent element ptr: 0xc632700
                Label element references: 1
                Label element child references: 0
                Label element lsp id: 0
                Session Id: 0x140
                Protocol next hop: 184.108.40.206
                Label operation: Push 20
                Label TTL action: prop-ttl
                Load balance label: Label 20: None;
                Indirect next hop: 0xb23e980 1048574 INH Session ID: 0x143
                State: <Secondary Active Int Ext ProtectionCand>
                Local AS: 100 Peer AS: 100
                Age: 38         Metric2: 1
                Validation State: unverified
                Task: BGP_220.127.116.11.1+179
                Announcement bits (1): 0-KRT
                AS path: I
                Communities: target:2:2
                Import Accepted
                VPN Label: 20
                Localpref: 100
                Router ID: 18.104.22.168
                Primary Routing Table bgp.l3vpn.0
         BGP    Preference: 170/-101
                Route Distinguisher: 22.214.171.124:33
                Next hop type: Indirect, Next hop index: 0
                Address: 0xc6330d0
                Next-hop reference count: 2
                Source: 126.96.36.199
                Next hop type: Router, Next hop index: 0
                Next hop: 188.8.131.52 via ge-0/0/0.0, selected  <----------------
                Label operation: Push 16, Push 17(top)
                Label TTL action: prop-ttl, prop-ttl(top)
                Load balance label: Label 16: None; Label 17: None;
                Label element ptr: 0xc633000
                Label parent element ptr: 0xc6327c0
                Label element references: 1
                Label element child references: 0
                Label element lsp id: 0
                Session Id: 0x0
                Protocol next hop: 184.108.40.206
                Label operation: Push 16
                Label TTL action: prop-ttl
                Load balance label: Label 16: None;
                Indirect next hop: 0xb23eb00 - INH Session ID: 0x0
                State: <Secondary NotBest Int Ext Changed ProtectionCand>
                Inactive reason: Not Best in its group - Router ID
                Local AS: 100 Peer AS: 100
                Age: 34         Metric2: 1
                Validation State: unverified
                Task: BGP_220.127.116.11.3+179
                AS path: I
                Communities: target:2:2
                Import Accepted
                VPN Label: 16
                Localpref: 100
                Router ID: 18.104.22.168
                Primary Routing Table bgp.l3vpn.0
## FORWARDING TABLE##
root@MX4> show route forwarding-table | find "IPVPN.inet"
Routing table: IPVPN.inet
Internet:
Enabled protocols: Bridging, All VLANs,
Destination        Type RtRef Next hop           Type Index    NhRef Netif
default            perm     0                    rjct      555     1
0.0.0.0/32         perm     0                    dscd      553     1
22.214.171.124/32  intf     0 126.96.36.199      locl      589     1
188.8.131.52/24    user     0                    indr  1048574     4
                              184.108.40.206     Push 20, Push 16(top)   615     2 ge-0/0/0.0   <-------- ONLY SINGLE LSP
However, if I apply "set routing-instances IPVPN routing-options multipath", then all LSPs are installed:
root@MX4> show route forwarding-table | find "IPVPN.inet"
220.127.116.11/24  user     0                    ulst  1048576     2
                                                 indr  1048574     4
                              18.104.22.168      Push 20, Push 16(top)   615     2 ge-0/0/0.0
                                                 indr  1048575
                              22.214.171.124     Push 16, Push 17(top)   616     2 ge-0/0/0.0
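Putting it together, the combination of knobs that produced the multi-LSP forwarding entry above is (all names taken from this lab; a sketch of the relevant lines only, not the full config):

```
set policy-options policy-statement LB then load-balance per-packet
set routing-options forwarding-table export LB
set protocols bgp group INTERNAL multipath
set routing-instances IPVPN routing-options multipath
```

Without the last line, only the single best LSP was installed in IPVPN.inet, even though BGP multipath and the per-packet export policy were already configured.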
I believe this is the article that you read:
In my view that first statement can be misleading.
What it really means is that equal-cost next hops are available for load balancing by default. (The router doesn't just pick one and throw the other away.) For example, you don't need to configure anything under OSPF or IS-IS for two or more equal-cost next hops to be available for load balancing; they get installed in the routing table by default. However, you still need to enable load balancing under the forwarding table for these next hops to be installed in the forwarding table. The article shows that the forwarding table has only one next hop before the policy is applied.
Now, BGP is different because it does not have a metric or cost. It has attributes, and it goes through a decision process that compares the values of those attributes (local-preference, AS-path length, origin, and so on) one by one until one and only one next hop is selected.
Thus, BGP always needs multipath configured to allow load balancing. Multipath makes the multiple next hops (with matching attributes) available for load balancing. Without multipath, the BGP decision process goes all the way down to comparing the router ID or the peer IP address, and chooses the route from the peer with the lowest value. So, at the end of the BGP decision process, only one next hop is available. With multipath, there is no RID or IP address comparison, and all the next hops are available for load balancing.
The only kind of load balancing that BGP does without multipath is what is called per-prefix load balancing: for example, if the router receives 100 different prefixes from the same two peers, it selects the first peer for maybe 50 of the prefixes and the other peer for the other 50. (It is not necessarily equal, because there is some hashing involved, but you get the idea.)
Also, for L3VPN load balancing, the multipath command you added under "routing-instances IPVPN routing-options" is required.
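In other words, on MX both multipath knobs are needed, one at the BGP protocol level and one inside the routing instance (both commands appear in the original poster's config):

```
set protocols bgp group INTERNAL multipath
set routing-instances IPVPN routing-options multipath
```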
Let me know if you have any further questions.
------------------------------
Yasmin Lara
Juniper Ambassador
JNCIE-SP, JNCIE-ENT, JNCIE-DC, JNCIE-SEC
JNCDS-DC, JNCIA-DevOps, JNCIP-CLOUD, CCNP-ENT
------------------------------
Original Message:
Sent: 12-30-2020 16:01
From: Unknown User
Subject: BGP Multipath and MX Load Balancing: Paths are not installed in Forwarding table.
Hopefully I'm not too far off topic, but somewhat related: when trying to accomplish L3VPN best-path selection without stopping at the RID step of the BGP best-path selection process, this helps: "set protocols ldp track-igp-metric".

Before that command, the inet.3 table has a metric of 0 (zero) for all prefixes BGP will use, so they are all equal. Once you use that command, inet.0 (IGP) metrics are copied over to inet.3, and BGP will use those metrics in its best-path calculation toward the next-hop PE for L3VPN.

This was very helpful when I needed to load-balance traffic in my network for CGNAT outbound across multiple MX960 CGNAT public pools. Prior to this, all my subscribers were exiting out a single CGNAT node and I was getting no NAT load balancing at all. This was the first step in successfully spreading my outbound traffic. With subscriber-facing edge PEs scattered throughout my network, the geo-dispersion creates a sufficient load spread across my multiple CGNAT internet-facing exit points.
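A minimal sketch of that change, assuming LDP is already the label protocol (the verification step is a standard Junos show command; outputs will vary per network):

```
set protocols ldp track-igp-metric
```

After committing, "show route table inet.3" should display the IGP metrics copied from inet.0 instead of metric 0, so the BGP best-path calculation toward the remote PEs can take IGP cost into account.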