I am having issues setting up vMX on OpenStack based on Red Hat OSP 10. I followed the Juniper guide listed below.
The stack is created and I see both VMs up, but the RE shows FPC 0 as absent. There is no connectivity issue on the internal network, and ping works from the RE to the FPC.
I don't see anything related to the FPC listed under "show chassis hardware".
Is anyone having similar issues? What is the solution?
Which release of vMX are you using?
Could you please confirm the memory and vCPUs allocated for both the vCP and vFP flavors?
Are you able to ping the vFP from the vCP and vice versa?
root> ping 184.108.40.206 routing-instance __juniper_private1__
You can follow the thread below.
I followed that thread already but was unable to find any useful information related to my issue in it.
I am using version 17.4R1.16. Yes, I can ping the vCP and vFP from each other. There are enough resources reserved for the vCP (8 GB and 8 vCPUs). There is no log message indicating any resource issue.
Is it mandatory to have a license installed before it can detect the FPC?
What's the NIC type?
Can you get the output of "lspci"?
A license is not mandatory for the FPC to come online.
Can you confirm the memory/vCPUs allocated? Is it SR-IOV or virtio?
Did you try lite-mode? If not, please configure the chassis in lite-mode, then restart the vFP and see.
Also, please share the message logs from the vFP.
I have tried lite mode but it didn't work. I have allocated 12 GB RAM and 8 vCPUs for this instance.
Please find the lspci output below; /var/log/messages from the vFPC node is attached.
root@host-10-1-19-30:~# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
00:05.0 Ethernet controller: Red Hat, Inc Virtio network device
00:06.0 Unclassified device [00ff]: Red Hat, Inc Virtio memory balloon
I am not able to ping beyond a packet size of 1472, so could this be related to an MTU issue?
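For reference, a don't-fragment ping from the vCP (a sketch; the vFP internal address is illustrative, and 1472 bytes of ICMP payload plus 28 bytes of headers is exactly a 1500-byte packet) shows where fragmentation starts:

root> ping <vFP-internal-address> routing-instance __juniper_private1__ size 1472 do-not-fragment count 5
root> ping <vFP-internal-address> routing-instance __juniper_private1__ size 1473 do-not-fragment count 5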
I am sorry, I couldn't understand the term "NYU issue?".
Logs from Horizon are attached for the vFP.
Could you please share /var/log from both the vCP and vFP?
Just my 2 cents on this…
In the past, with OpenStack setups, I have seen issues caused by the MTU and overlay encapsulation. The symptoms were similar to what you are observing.
In your case [ping is working]:
Ping from the vCP to the vFP (220.127.116.11 from the __juniper_private1__ routing instance)
SSH from the vCP to the vFP (18.104.22.168 with user root)
If the MTU on the internal link is causing problems, the SSH connection will have issues; something similar would happen for RPC, which is the protocol used to download the processes between the two VMs.
It looks like an MTU issue, as I am not able to SSH over the internal network. How can I make sure the internal network is created with the correct MTU, and what should that value be? The internal network below is the one that was created, and I see the MTU is set to 1496. However, on all compute/controller nodes the MTU is set to 9000.
Is this value set in some script while creating the internal network, and how can I change it?
n@director openstack]$ neutron net-show d6ac8534-d402-481d-bbf9-8c7ec521fdca
+---------------------------+-----------------------------------------+
| Field                     | Value                                   |
+---------------------------+-----------------------------------------+
| admin_state_up            | True                                    |
| availability_zone_hints   |                                         |
| availability_zones        | nova                                    |
| created_at                | 2018-03-26T11:14:25Z                    |
| description               |                                         |
| id                        | d6ac8534-d402-481d-bbf9-8c7ec521fdca    |
| ipv4_address_scope        |                                         |
| ipv6_address_scope        |                                         |
| mtu                       | 1496                                    |
| name                      | Network_vMX_Internal-VMX-1-vfp0-to-vcp0 |
| port_security_enabled     | True                                    |
| project_id                | 5ba7a8b6dceb4533a5b7ada712ee29ab        |
| provider:network_type     | vlan                                    |
| provider:physical_network | datacentre                              |
| provider:segmentation_id  | 1001                                    |
| qos_policy_id             |                                         |
| revision_number           | 5                                       |
| router:external           | False                                   |
| shared                    | False                                   |
| status                    | ACTIVE                                  |
| subnets                   | eab89d67-91b8-40fb-8da6-3b5d723b13ff    |
| tags                      |                                         |
| tenant_id                 | 5ba7a8b6dceb4533a5b7ada712ee29ab        |
| updated_at                | 2018-03-26T11:14:26Z                    |
+---------------------------+-----------------------------------------+
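For comparison, the interface MTU on the compute/controller nodes can be checked with ip link (the interface name here is only an example):

ip link show eth0 | grep mtu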
We can change this in the neutron.conf file; we need to make sure neutron uses a 9000 MTU.
After making the changes, restart the neutron services.
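As a rough sketch (assuming Newton-era option names for OSP 10; paths and values should be adjusted to your environment), the relevant settings and restart on the controller would look like this:

# /etc/neutron/neutron.conf
[DEFAULT]
global_physnet_mtu = 9000

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
path_mtu = 9000

# restart the Neutron API service so new networks pick up the MTU
systemctl restart neutron-server

Note that networks created before the change may keep their old MTU and might need to be recreated.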
It might be worth noting you need to SSH via the routing instance:
ssh email@example.com routing-instance __juniper_private1__
I can make that connection, but I have a very similar problem to yours running 17.4R1.16. I can't for the life of me get the vFPC up (as seen by the vCP). It just sits there 'transitioning', doing nothing except generating a bunch of riot core dump files with no content.
Have you checked the MTU configuration as mentioned in this post? For me the MTU was the issue, and changing that value in the neutron config file on both the controller and compute nodes fixed it.
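To verify end to end after the change, a don't-fragment ping from inside one of the VMs can confirm that jumbo frames pass (a sketch; the peer address is illustrative, and 8972 = 9000 minus 28 bytes of IP/ICMP headers):

ping -M do -s 8972 -c 3 <peer-address>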
I can run the vMX even with 4 GB RAM and 4 vCPUs.
Please try using lite-mode.
#set chassis fpc 0 lite-mode
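After committing and restarting the vFP, something like the following (illustrative) should show the FPC coming online:

root> show chassis fpc
root> show chassis hardware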