The Junos release you are using should be stable for VXLAN.
From the routing instance, with BGP as the dynamic routing protocol to the core router.
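For illustration only, a minimal sketch of such a peering from the instance toward the core router (the group name TO-CORE, the peer AS 65100, and the neighbor address 198.51.100.1 are placeholders, not taken from this thread):
set routing-instances EVPN-1 protocols bgp group TO-CORE type external
set routing-instances EVPN-1 protocols bgp group TO-CORE peer-as 65100
set routing-instances EVPN-1 protocols bgp group TO-CORE neighbor 198.51.100.1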
With this setup it is very easy to separate different traffic types. For example, you can have one instance for the internet and one for the DMZ, and use a firewall to allow traffic between the instances.
It is also possible to do route leaking between instances, but I recommend not doing any leaking to inet.0.
inet.0 is used for the underlay and overlay setup of the VXLAN fabric.
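As a rough sketch of leaking between two instances (keeping inet.0 out of it), using instance-import with placeholder instance names INTERNET and DMZ:
set policy-options policy-statement LEAK-FROM-DMZ from instance DMZ
set policy-options policy-statement LEAK-FROM-DMZ then accept
set routing-instances INTERNET routing-options instance-import LEAK-FROM-DMZ
A matching policy in the other direction is needed if the DMZ instance also has to see the internet routes.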
------------------------------
Kalle Andersson
------------------------------
Original Message:
Sent: 08-16-2022 07:44
From: Unknown User
Subject: vxlan irb interface increases the traffic latency on QFX5110
Junos: 21.2R2-S1.5 flex
How does your VXLAN L3 communicate with the networks outside the VXLAN?
Our QFX5110 does not serve as spine or leaf; it just serves DCI. It needs to host both normal VLANs and VXLAN.
thanks !!
Original Message:
Sent: 08-16-2022 01:47
From: Kalle Andersson
Subject: vxlan irb interface increases the traffic latency on QFX5110
I have a very small VXLAN fabric in production, with two QFX5100 as leaf and two QFX5110 as spine (also acting as L3 GWs).
The L3 functionality works as it should at the spine level.
What firmware version do you use?
------------------------------
Kalle Andersson
------------------------------
Original Message:
Sent: 08-15-2022 11:32
From: Unknown User
Subject: vxlan irb interface increases the traffic latency on QFX5110
thanks so much for your response.
We did come across "Understanding How to Configure VXLANs and Layer 3 Logical Interfaces to Interoperate" and tried to create a dummy VXLAN VNI on the pure L3 interface, but it did not work.
A few years ago, when I used VXLAN on the 51XX, I also came across this in "Juniper configuration knobs and caveats": "QFX5110 uses the Trident2+ PFE ASIC, which supports VXLAN routing. This switch can be used as an L3 gateway. However, it has a limitation where traffic cannot be routed from VXLAN to a VLAN (non-VXLAN) and vice versa. It also affects the general deployment of QFX5110 when it is used as an L3 VXLAN gateway." These lines are not included any more. I asked our previous local Juniper Sales Engineer, and he said this limitation does not exist any more.
Recently, our current Juniper Sales Engineer also mentioned "skip the route leaking between EVPN-1 and inet.0". I am curious about this comment and asked whether there are any Juniper docs on it and what the theory behind it is. I was told this is from his experience, not from the docs.
Original Message:
Sent: 08-15-2022 08:04
From: Kalle Andersson
Subject: vxlan irb interface increases the traffic latency on QFX5110
The QFX5110 chipset is Trident2+, therefore the switch needs some extra configuration when it is used as an L3 gateway for VXLAN:
set vlans vlan1140 vxlan vni 16770000(DUMMY VNI)
set protocols evpn vni-options vni 16770000 vrf-target target:1:16770000
set protocols evpn extended-vni-list 16770000
I would also skip the route leaking between EVPN-1 and inet.0.
Use EVPN-1 directly to the MX, i.e. add the interface that is connected to the MX to routing-instance EVPN-1.
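A rough sketch of what that looks like, assuming a hypothetical xe-0/0/40 toward the MX (interface name and addressing are placeholders, not from this thread):
set interfaces xe-0/0/40 unit 0 family inet address 198.51.100.2/31
set routing-instances EVPN-1 interface xe-0/0/40.0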
------------------------------
Kalle Andersson
------------------------------
Original Message:
Sent: 08-08-2022 15:08
From: Unknown User
Subject: vxlan irb interface increases the traffic latency on QFX5110
Does anyone have any input on route leaking between the VRFs and the global table? OK or bad?
Original Message:
Sent: 08-06-2022 00:21
From: Unknown User
Subject: vxlan irb interface increases the traffic latency on QFX5110
I have the following topology:
Without IRB configured, the latency between Host-1 and Host-2 is below 1 ms via VXLAN.
As the QFX5110 does not support an IRB interface in the global routing instance (inet.0), I did it the following way:
set interfaces irb unit 1140 virtual-gateway-accept-data
set interfaces irb unit 1140 family inet address 10.67.254.30/27 virtual-gateway-address 10.67.254.1
set interfaces lo0 unit 1 family inet address 10.68.191.235/32
set routing-instances EVPN-1 instance-type vrf
set routing-instances EVPN-1 interface irb.1140
set routing-instances EVPN-1 interface lo0.1
set routing-instances EVPN-1 route-distinguisher 10.68.191.235:1
set routing-instances EVPN-1 vrf-target target:65534:65534
set routing-instances EVPN-1 routing-options interface-routes rib-group inet to.inet0
set routing-instances EVPN-1 routing-options static route 0.0.0.0/0 next-table inet.0
set routing-options rib-groups to.inet0 import-rib EVPN-1.inet.0
set routing-options rib-groups to.inet0 import-rib inet.0
With the above configuration, Host-3 can reach both Host-1 and Host-2, but the latency jumps to about 5 ms; even the latency between Host-1 and Host-2 jumps to about 5 ms.
The worse thing is that when I added additional IRB interfaces, the latency spiked to 700 ms.
Does anyone have clues or insights on this?
I do have a case open with Juniper.
thanks so much !!