vMX

FPC absent , vMX on openstack

  • 1.  FPC absent , vMX on openstack

    Posted 03-24-2018 05:11

    Hi,

    I am having issues setting up vMX on OpenStack based on Red Hat OSP 10. I followed the Juniper guide listed below.

    https://www.juniper.net/documentation/en_US/vmx17.3/information-products/pathway-pages/getting-started/vmx-gsg-openstack.html

    The stack is created and I see both VMs up, but the RE shows FPC 0 as Absent. There is no connectivity issue on the internal network, and ping works from the RE to the FPC.

    I don't see anything related to the FPC listed under "show chassis hardware".

     

    Is anyone having similar issues? What is the solution?

     

    Regards,

    Muhammad Hasnain

     



  • 2.  RE: FPC absent , vMX on openstack

    Posted 03-24-2018 07:39

    Hi Muhammad,

     

    Which release of vMX are you using?

    Could you please confirm the memory and vCPUs allocated for both the vCP and vFP flavors?

    Are you able to ping the vFP from the vCP and vice versa?

    vCP:

    root> ping 128.0.0.16 routing-instance __juniper_private1__

    vFP:

    root> ping 128.0.0.1

     

    -

    Vishruth



  • 3.  RE: FPC absent , vMX on openstack

    Posted 03-24-2018 11:38

    Hi,

     

    You can follow the thread below.

     

    https://forums.juniper.net/t5/vMX/vMX-can-t-connect-fpc-slot-0-Absent/td-p/298032

     


    //Regards

    AD



  • 4.  RE: FPC absent , vMX on openstack

    Posted 03-24-2018 12:30

    Hi,

      I followed that thread already but was unable to find any useful information related to my issue in it.

     

    I am using version 17.4R1.16. Yes, the VCP and VFPC can ping each other. There are enough resources reserved for the VCP (8 GB and 8 vCPUs). There is no log message indicating any resource issues.

    Is it mandatory to have a license installed before it can detect the FPC?

     

    Regards,

    Muhammad Hasnain



  • 5.  RE: FPC absent , vMX on openstack

     
    Posted 03-25-2018 04:43

     

    What's the NIC type?

    Can you get the output of "lspci"?

     



  • 6.  RE: FPC absent , vMX on openstack

    Posted 03-25-2018 05:23

    Hi Muhammad,

     

    A license is not mandatory for the FPC to come online.

     

    Can you confirm the memory and vCPUs allocated? Is it SR-IOV or virtio?

     

    Did you try lite-mode? If not, please configure the chassis in lite-mode, then restart the vFP and see.

     

    Also, please share the message logs from the vFP.



  • 7.  RE: FPC absent , vMX on openstack

    Posted 03-25-2018 10:48

    I have tried lite mode but it didn't work. I have allocated 12 GB RAM and 8 vCPUs for this instance.

    Please find the lspci output below; /var/log/messages from the VFPC node is attached.

     

    root@host-10-1-19-30:~# lspci
    00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
    00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
    00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
    00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
    00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
    00:02.0 VGA compatible controller: Cirrus Logic GD 5446
    00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
    00:04.0 Ethernet controller: Red Hat, Inc Virtio network device
    00:05.0 Ethernet controller: Red Hat, Inc Virtio network device
    00:06.0 Unclassified device [00ff]: Red Hat, Inc Virtio memory balloon

     

    Regards,

    Muhammad Hasnain



  • 8.  RE: FPC absent , vMX on openstack

    Posted 03-25-2018 10:58
    Being able to ping is a good first step. Can you check to make sure it is not an NYU issue?

    Try pinging with 1500-byte frames to see if the packets go through.
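
    For example, from the vCP a sweep around the suspected boundary would look something like this (a sketch; the `size` value is the ICMP payload, so 1472 corresponds to a full 1500-byte IP packet once the 20-byte IPv4 and 8-byte ICMP headers are added):

    root> ping 128.0.0.16 routing-instance __juniper_private1__ size 1472 do-not-fragment count 5
    root> ping 128.0.0.16 routing-instance __juniper_private1__ size 1473 do-not-fragment count 5

    If the first succeeds and the second fails, the internal path is clamped at a 1500-byte MTU.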

    Also please attach the output from the VFP log in Horizon. It can provide clues as to why the PFE code is not running.

    Thanks,
    -Paul


  • 9.  RE: FPC absent , vMX on openstack

    Posted 03-25-2018 11:14
      |   view attached

    Hi Paul,

      I am not able to ping beyond packet size 1472, so could it be related to an MTU issue?

    I am sorry, I couldn't understand the term "NYU issue".
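
    For what it's worth, the 1472-byte cutoff is exactly what a 1500-byte path MTU predicts, since each echo carries a 20-byte IPv4 header and an 8-byte ICMP header (a quick arithmetic check, assuming IPv4 with no options):

    ```shell
    # Largest ping payload that fits in one frame: MTU - 20 (IPv4 header) - 8 (ICMP header)
    echo $(( 1500 - 20 - 8 ))   # 1472 -> matches the observed cutoff
    echo $(( 9000 - 20 - 8 ))   # 8972 -> the equivalent figure for a 9000-byte jumbo MTU
    ```
    
    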

     

    Logs from Horizon for the VFP are attached.

     

    Regards,

    Muhammad Hasnain

    Attachment(s): session_2.txt (26K)


  • 10.  RE: FPC absent , vMX on openstack

    Posted 03-25-2018 11:34
    NYU was a mobile-phone auto-correct for MTU. But the MTU issue was resolved in an earlier release than the one you are using.

    /var/log/messages isn't showing anything. Can you get the log from Horizon? That has the serial console output.

    Thanks,


  • 11.  RE: FPC absent , vMX on openstack

    Posted 03-25-2018 11:51

    Hi,

     

    Please share /var/log from both the vCP and the vFP.

    //Regards

    AD



  • 12.  RE: FPC absent , vMX on openstack
    Best Answer

     
    Posted 03-25-2018 22:29

    Hi Folks,

    Just my 2 cents on this…

     

    In the past, with OpenStack setups, I saw issues derived from the MTU and overlay encapsulation. The symptoms were similar to what you are observing.

     

    In your case, ping is working from the vCP to the vFP (128.0.0.16 via the __juniper_private1__ routing instance).

    Try SSH from the vCP to the vFP (128.0.0.16 as user root).

    If the MTU on the internal link is causing problems, the SSH connection will have issues. Something similar would happen for the RPC protocol, which is used between the two VMs to download the forwarding software.



  • 13.  RE: FPC absent , vMX on openstack

    Posted 03-26-2018 05:15

    Hi,

      It looks like an MTU issue, as I am not able to SSH over the internal network. How can I make sure the internal network is created with the correct MTU, and what should that value be? The internal network that was created is shown below, and I see its MTU is set to 1496; however, on all compute/controller nodes the MTU is set to 9000.

     

    Is this value set by some script while creating the internal network, and how can I change it?

     

    n@director openstack]$ neutron net-show d6ac8534-d402-481d-bbf9-8c7ec521fdca
    +---------------------------+-----------------------------------------+
    | Field | Value |
    +---------------------------+-----------------------------------------+
    | admin_state_up | True |
    | availability_zone_hints | |
    | availability_zones | nova |
    | created_at | 2018-03-26T11:14:25Z |
    | description | |
    | id | d6ac8534-d402-481d-bbf9-8c7ec521fdca |
    | ipv4_address_scope | |
    | ipv6_address_scope | |
    | mtu | 1496 |
    | name | Network_vMX_Internal-VMX-1-vfp0-to-vcp0 |
    | port_security_enabled | True |
    | project_id | 5ba7a8b6dceb4533a5b7ada712ee29ab |
    | provider:network_type | vlan |
    | provider:physical_network | datacentre |
    | provider:segmentation_id | 1001 |
    | qos_policy_id | |
    | revision_number | 5 |
    | router:external | False |
    | shared | False |
    | status | ACTIVE |
    | subnets | eab89d67-91b8-40fb-8da6-3b5d723b13ff |
    | tags | |
    | tenant_id | 5ba7a8b6dceb4533a5b7ada712ee29ab |
    | updated_at | 2018-03-26T11:14:26Z |
    +---------------------------+-----------------------------------------+



  • 14.  RE: FPC absent , vMX on openstack

    Posted 03-26-2018 05:34

    We can change this in the neutron.conf file. We need to make sure Neutron uses a 9000-byte MTU.

     

    After making the changes, restart Neutron.
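
    On a Newton-era deployment such as OSP 10, the relevant knob is roughly the following (a sketch; check the exact option name against your Neutron release, and note that in this release the MTU of an existing network cannot be changed, so the internal network has to be deleted and recreated after the restart):

    # /etc/neutron/neutron.conf on the controller and compute nodes
    [DEFAULT]
    # MTU of the underlying physical network; VLAN networks inherit this value
    global_physnet_mtu = 9000

    Then restart neutron-server (and the agents) and recreate the internal network so it picks up the new MTU.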



  • 15.  RE: FPC absent , vMX on openstack

    Posted 04-10-2018 05:49

    It might be worth noting you need to SSH via the routing instance:

     

    ssh root@128.0.0.16 routing-instance __juniper_private1__

     

    I can make that connection, but I have a very similar problem to yours running 17.4R1.16. I can't for the life of me get the vFPC up (as seen by the vCP). It just sits there 'transitioning', doing nothing except generating a bunch of riot core-dump files with no content.



  • 16.  RE: FPC absent , vMX on openstack

    Posted 04-10-2018 09:18

    Hi,

        Have you checked the MTU configuration as mentioned in this post? For me MTU was the issue; changing that value in the Neutron config file on both the controller and compute nodes fixed it.

    I can run the vMX even with 4 GB RAM and 4 vCPUs.

     

    Regards,

    Muhammad Hasnain



  • 17.  RE: FPC absent , vMX on openstack

    Posted 03-25-2018 09:32

    Hi,

    Please try using lite-mode.

     

    #set chassis fpc 0 lite-mode
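
    After committing, the change typically only takes effect once the FPC restarts, along these lines (a sketch; syntax per standard Junos operational mode):

    # commit
    > request chassis fpc slot 0 restart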

     

    //Regards

    AD