Hi, I've recently run into a strange problem.
This is vMX deployment on ESXi.
When vMX VMs (vcp, vfpc) are deployed, two interfaces (vnics) are created by default.
The first one is for fxp0 and the second one is for internal re-pfe communication.
If I add one more vnic, it will be used as ge-0/0/0.
As I add more vnics, the interface ordering is preserved as listed below.
vnic1 - fxp0
vnic2 - internal link
vnic3 - ge-0/0/0
vnic4 - ge-0/0/1
vnic5 - ge-0/0/2
vnic6 - ge-0/0/3
If I add one more vnic (a seventh), the interfaces are no longer in order:
vnic4 - ge-0/0/2
vnic5 - ge-0/0/3
vnic6 - ge-0/0/4
vnic7 - ge-0/0/1
Has anybody experienced a similar issue?
Thanks in advance.
The mismatch is caused by the way ESXi maps PCI slot numbers to the guest-visible PCI bus topology (see https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2047927 for details).
It's not a vMX-specific issue; if you bring up a Linux VM with 10 network interfaces, you will see similar behaviour.
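For anyone who wants to see where the ordering comes from: the guest enumerates NICs by PCI slot/bridge position, which ESXi records as `ethernetN.pciSlotNumber` entries in the VM's .vmx file. The slot values below are hypothetical examples (they vary per host and VM); the point is that guest enumeration follows slot order, not the `ethernetN` index, so once the pciBridge assignments wrap, a later vnic can enumerate before an earlier one.

```
# Hypothetical excerpt from a vMX VFP .vmx file (example slot values only).
# With the VM powered off, these entries can be inspected or edited to
# force a deterministic order, per the VMware KB linked above.
ethernet0.pciSlotNumber = "160"   # fxp0 / ext
ethernet1.pciSlotNumber = "192"   # internal re-pfe link (br-int)
ethernet2.pciSlotNumber = "224"   # ge-0/0/0
ethernet3.pciSlotNumber = "256"   # ge-0/0/1
```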
I've experienced the same thing with vMX on Workstation 12. It's exactly the same order you have listed when I add 5 more vnics. I've just lived with it, or modified the config file to redo the vnic-to-interface mapping.
Ran into the same issue. This problem happens when adding vmxnet interfaces for your revenue ports. It is a problem with the VMware pciBridge load-balancing mechanism and is not present with the e1000 type. That said, in my experience the e1000 driver performs poorly (PFE errors, interfaces flapping, etc.) when adding the maximum number of interfaces.
Found the solution in the SRX tech library doc:
Keep br-int and ext as e1000 for both the RE and PFE, or they will not sync and fxp0 will be inaccessible.
Then follow the guide for adding vmxnet ports. Worked like a charm.
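To illustrate the split described above, here is a hypothetical sketch of what the resulting .vmx entries might look like (adapter names are standard VMware `virtualDev` values; the comments map to the vMX roles discussed in this thread, and your `ethernetN` indices may differ):

```
# Hypothetical .vmx excerpt: keep the management and internal links
# on e1000 so the RE and PFE sync, and use vmxnet3 only for revenue ports.
ethernet0.virtualDev = "e1000"    # ext / fxp0
ethernet1.virtualDev = "e1000"    # internal re-pfe link (br-int)
ethernet2.virtualDev = "vmxnet3"  # ge-0/0/0
ethernet3.virtualDev = "vmxnet3"  # ge-0/0/1
```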
I just received an email on this issue, and the vSRX link has now been removed from J-Net. I did a quick blog post on it: http://matt.dinham.net/interface-ordering-vmware/