Hi, I'm experimenting with dual-stack on Juniper devices and would like some suggestions and tips from the Juniper Community. I'm running a test lab with an MX104, an SRX340 and an SRX650. The backbone is MPLS-IPv4, running OSPFv2, LDP, MPLS, BGP and VRFs. We would like to implement IPv6 alongside IPv4. I've been researching the different dual-stacking techniques and found a few solutions, and I'd like you to verify that I understand them correctly:
Which technique do you recommend, dual-stack or tunneling (such as 6PE)? The goal is to keep running these routing protocols alongside IPv6. Which technique has the least impact on resources as well as on network configuration?
I appreciate every reply.
DS-Lite is not a Juniper-specific feature or a dual-stack solution: it is an IPv4-over-IPv6 tunneling migration solution aimed at end-user customers/subscribers (together with CGNAT). It is probably not what you are looking for at all.
On our MPLS-IPv4 (IS-IS, LDP, VRF) MX backbone, we use 6PE (in the master/global table) and 6VPE (in the VRFs). All the IPv6 routes (and their labels) are carried in MP-BGP. There is no IPv6 in the IGP or on the internal MPLS interconnects. It has worked well for years.
Hi Olivier, thank you for your reply.
That does not sound like a lot of work, does it? We have our own public IPv6 ranges that we want to advertise to our clients. When it comes to implementing 6PE/6VPE, which security concerns should I keep in mind? Do you have any tips I can use?
Very simple, really.
set protocols mpls ipv6-tunneling
We also use the following lines for various reasons: working VRFs and traceroutes, internal paths that follow IGP metrics even with MPLS/LDP, and explicit null everywhere (not mandatory at all):
set protocols mpls traffic-engineering mpls-forwarding
set protocols mpls icmp-tunneling
set protocols ldp track-igp-metric
set protocols ldp explicit-null
For 6PE, strange as it seems, you must add family inet6 on the internal MPLS interfaces (without any specific IP address), probably along with a jumbo MTU, just as you would do internally for IPv4:
set interfaces <All-internal-MPLS-interfaces> unit <blah> family inet6 dad-disable (adds the IPv6 family and at the same time disables DAD, a really useless feature here)
set interfaces <All-internal-MPLS-interfaces> mtu 9192 (actually, any supported max jumbo MTU compatible with all your gear; you should already be using something like that with MPLS, it is not specific to IPv6)
You will add the 6PE and 6VPE address families in your internal MP-BGP group(s), which makes your IBGP sessions flap at commit (and notice that here, «explicit null» is mandatory for 6PE):
set protocols bgp group My_IBGP_Group family inet6 labeled-unicast explicit-null
set protocols bgp group My_IBGP_Group family inet6-vpn unicast
With IS-IS you would have to make sure ipv6-unicast is disabled on each internal interface (since you're using 6PE), but with OSPFv2 there's no IPv6 support, so there is nothing to do.
Then add an IPv6 address on your lo0, and don't forget to configure an inbound family inet6 firewall filter on that lo0 to protect your router; do the same thing within the VRFs if they have a loopback interface configured.
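As an illustration only (the filter and term names here are made up, and the allowed protocols must be adapted to your own control plane), a minimal lo0 inet6 filter could look like this; note that ICMPv6 must be permitted or IPv6 neighbor discovery breaks:

set firewall family inet6 filter PROTECT-RE-V6 term bgp from next-header tcp
set firewall family inet6 filter PROTECT-RE-V6 term bgp from port bgp
set firewall family inet6 filter PROTECT-RE-V6 term bgp then accept
set firewall family inet6 filter PROTECT-RE-V6 term icmp6 from next-header icmp6
set firewall family inet6 filter PROTECT-RE-V6 term icmp6 then accept
set firewall family inet6 filter PROTECT-RE-V6 term deny-rest then discard
set interfaces lo0 unit 0 family inet6 filter input PROTECT-RE-V6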
------------------------------
MOHAMAMD AYASH

Original Message:
Sent: 03-23-2023 06:35
From: Olivier Benghozi
Subject: Dual Stack implementation on Juniper Network Devices
------------------------------
Olivier Benghozi

Original Message:
Sent: 03-22-2023 07:11
From: MOHAMAMD AYASH
Subject: Dual Stack implementation on Juniper Network Devices
About inet6 firewall filters: to match the Layer 4 protocol, you'll have to use «next-header» in loopback filters and «payload-protocol» in interface filters on MX (but probably next-header on SRX, not too sure). Annoying.
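To illustrate the difference (filter and term names here are placeholders), the same TCP match would be written like this on MX depending on where the filter is applied:

set firewall family inet6 filter LO0-V6 term tcp from next-header tcp (loopback filter)
set firewall family inet6 filter TRANSIT-V6 term tcp from payload-protocol tcp (transit interface filter)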
We also use this for various reasons (to have working VRFs and traceroutes):
set interfaces <All-internal-MPLS-interfaces> unit <blah> family inet6 mtu 9158 (or your current IPv4 internal jumbo MTU; don't forget to remove 20 bytes)
Thank you again, Olivier. I will test this out for sure! You've been amazing; glad I found someone who can help.
I agree with and reiterate Olivier's comments. I do 6VPE, which as I understand it is IPv6 over L3VPN. My Internet and CDN links are dual-stacked.
I already had IPv4 L3VPN. Then, sometime later, I needed to dual-stack, so I had to change the edge of my SP core: I only had to add IPv6 addressing on the pre-existing PE-CE connections.
The underlying MP-BGP L3VPN work was adding "family inet6-vpn unicast" to the pre-existing MP-BGP peering sessions. Be careful, they bounce. Hopefully you have a redundant route-reflector hub architecture; if so, do each neighbor one at a time and wait for it to settle before doing the next one.
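A one-neighbor-at-a-time rollout can be sketched like this (group name and addresses are hypothetical). One Junos caveat to be aware of: configuring any family at the neighbor level replaces the group-level family list for that neighbor, so restate the existing families too:

set protocols bgp group IBGP-RR neighbor 10.0.0.1 family inet-vpn unicast (restate the pre-existing family)
set protocols bgp group IBGP-RR neighbor 10.0.0.1 family inet6-vpn unicast (the new family)

Commit, wait for the session to re-establish and routes to settle, then repeat for the next neighbor.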
I do not understand the need for 6PE. I don't think I had to do 6PE for 6VPE to work.
Yes, for security you will have to replicate everything you've ever learned and implemented in the IPv4 world, but now in the IPv6 world. The attack surface doubles once IPv6 traffic can enter and leave your network.
Below are some old juniper-nsp mailing-list threads with others that proved quite helpful to me when I was learning.
https://puck.nether.net/pipermail/juniper-nsp/2016-June/032973.html
https://puck.nether.net/pipermail/juniper-nsp/2016-June/032974.html
https://puck.nether.net/pipermail/juniper-nsp/2016-June/032975.html
https://puck.nether.net/pipermail/juniper-nsp/2016-June/032985.html
https://puck.nether.net/pipermail/juniper-nsp/2016-June/032986.html
https://puck.nether.net/pipermail/juniper-nsp/2016-June/032987.html
https://puck.nether.net/pipermail/juniper-nsp/2016-June/032990.html
By the way, I think the Junos IPv6 static route command is weird... so sharing here:
set routing-instances one routing-options rib one.inet6.0 static route 2626:2626:0:53::/64 next-hop 2626:2626:0:5f::1
Thank you Aaron for your message. These are some nice tips, thank you so much. I'll check the links you provided.
I agree; here we have 6PE configured and active... but we're not actually using it :)
All our traffic is in VRFs (including DFZ/internet) so 6VPE is where our IPv6 traffic really is. Our global/master instance is really only an MPLS control plane.
But, 6PE works anyway (so we can use IPv6 everywhere).