We have two sides of an environment where we statically NAT ranges of private IPs to public IPs and vice versa. On one side, we leverage a vSRX (on 15.1X49-D110.4), where this traffic lives only in the global routing instance.
On the backup link, we have IPsec terminating into a routing instance (MNO) on an SRX240 (on 12.3X48-D65.1), which then passes traffic into the global RI.
With either path, if I generate traffic from the public side to the private side, NAT functions as expected. I see an appropriate session and translation created within either SRX and away we go.
If I generate traffic from the private side to the public side, I see the vSRX create the reverse translation as expected; the SRX240, however, does not.
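For reference, I'm checking for the session and translation with commands along these lines (the prefix matches the static rule below):

show security flow session destination-prefix x.x.79.249
show security nat static rule all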
We route between these RIs on the SRX240 via lt interfaces, and the (truncated) NAT policy is as follows:
set security nat static rule-set MNO_NAT from zone INSIDE
set security nat static rule-set MNO_NAT rule 3_TEST2 match destination-address x.x.79.249/32
set security nat static rule-set MNO_NAT rule 3_TEST2 then static-nat prefix 10.59.15.254/32
set security nat static rule-set MNO_NAT rule 3_TEST2 then static-nat prefix routing-instance MNO
The rule set on the vSRX is identical, save for the routing-instance statement.
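For completeness, the lt interconnect between the global instance and MNO looks roughly like this; unit numbers, addressing, and instance placement are illustrative rather than the exact config:

set interfaces lt-0/0/0 unit 0 encapsulation ethernet
set interfaces lt-0/0/0 unit 0 peer-unit 1
set interfaces lt-0/0/0 unit 0 family inet address 10.255.255.1/30
set interfaces lt-0/0/0 unit 1 encapsulation ethernet
set interfaces lt-0/0/0 unit 1 peer-unit 0
set interfaces lt-0/0/0 unit 1 family inet address 10.255.255.2/30
set routing-instances MNO interface lt-0/0/0.1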
My assumption is that it's a configuration issue; however, we did seem to have this working properly when we leveraged rib-groups instead of lt interfaces. The how and why of that change is a conversation for another day, and I'm nearing the point of reverting to rib-groups.
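For those unfamiliar, the rib-group approach we'd revert to is along these lines, leaking interface routes between the instances (the group name is a placeholder):

set routing-options rib-groups MNO-TO-GLOBAL import-rib [ MNO.inet.0 inet.0 ]
set routing-instances MNO routing-options interface-routes rib-group inet MNO-TO-GLOBAL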
While trying to troubleshoot this, I created a source NAT rule as per this link, to no avail; I saw the same behavior.
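The traces below were captured with basic flow traceoptions, something like the following (file and filter names are placeholders):

set security flow traceoptions file nat-trace
set security flow traceoptions flag basic-datapath
set security flow traceoptions packet-filter PF1 source-prefix 10.59.15.254/32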
Taking some traces, I see the following for the failed translation:
And the good side from the vSRX:
So I'm at a loss as to why this is occurring. I've attached a sanitized config that's relevant for this setup.
I'm certainly open to suggestions. I do have a JTAC case open, but it's not going to be followed up on until Monday.
So for inquiring minds, I was looking at the behavior of the SRX logically. Seems that's a bad thing to do.
My from zone rule statement was apparently pointing to the wrong zone. I was looking at it from the perspective of traffic traversing the zone. In my case, I was pinging from a server on the INSIDE towards the MNO zone/instance. As that worked, I'd assumed it should work in the reverse, but I needed to have that statement point to the MNO zone, and now it works.
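In other words, the change boiled down to this:

delete security nat static rule-set MNO_NAT from zone INSIDE
set security nat static rule-set MNO_NAT from zone MNO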
Because of the order of operations, I wouldn't have thought that would cover it, as traffic from the INSIDE doesn't have any NAT rules associated with the INSIDE zone. But apparently that's not the reality of it: static NAT is bidirectional, the from zone matches where the untranslated (public-facing) traffic arrives, and the reverse translation for traffic initiated from the private side is derived from that same rule.
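With the from zone corrected, I can confirm the reverse translation on sessions initiated from the private side with something like:

show security flow session source-prefix 10.59.15.254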