Data Center



qfx10K mcl-peers concurrently supporting mc-ae and VSTP question.

Posted 20 days ago

I posted this last week on the Switching forum, but on second thought this may be a more appropriate place for it.

The gist: we built 2x qfx10008s as MC-LAG peers and plan to interconnect them with our last pair of legacy ex8216s.
Details are below, but in short the ex8216s are the L2/L3 boundary and run VSTP / VRRP at the distribution/core layer of our legacy data center. I have VCE converged cabinets using nexus 9Ks, mapping r-pvst+ to VSTP. This is a working DC, so I need to migrate and cut over these devices as-is.

Given how the ICCP/ICL-PL behave, and given that VSTP BPDUs do not appear to pass or converge between the ex8216s and the qfx10Ks over an mc-ae,
am I going to need another aeX trunk between the two 10Ks to carry the VLANs configured for VSTP?
I don't see any NCE or details about this in the MC-LAG config guide.

I'm hoping to leverage the expertise I've seen here. I'm taking this tack because JTAC is not answering my questions, labeling them as testing / non-production oriented.
We've been a Juniper shop for > 12 years, from v10.x onward, with experience deploying many platforms: ex23xx, ex42xx, ex34xx, ex43xx, ex46xx, ex9214, ex9253, qfx35xx, qfx510xx, qfx5120, qfx10008, mx5, mx104, srx300, srx5600, etc., using virtual-chassis, VCF (now gone), and clustering.
We've deployed a number of MC-LAG peers (ex4600 and ex9214) at the distribution/spine layer [L2/L3] of small sites and campuses, and have a feel for most of the niggles, but who knows everything ;-)
I know I don't.

Our legacy data center has 2x ex8216s at the dist/spine layer in a traditional L2/L3 config; RSTP is disabled, but VSTP is used for approx 55 of 85 VLANs.
- Several TOR access switches are deployed with uplinks using RTG; when cut over to the planned qfx10Ks, they would be reconfigured with a single aeX to an mc-aeX on the qfx.
- Other TORs connect in an H pattern, hosting VLANs configured with VSTP, rooted to (say) ex8216[1].
- We have a couple of EMC/VCE converged compute solutions hosting 95% of our VMware stack, which connect using cisco nexus 9Ks and r-pvst+ per VLAN.
The EOL ex8216s have to go... we purchased and built 2x qfx10008s and deployed them as MC-LAG peers.
This is a brownfield implementation and cut-over, not greenfield, so it's tricky.

My integration approach was to use one main mc-ae from the qfx10Ks to the primary ex8216[1], extend the L2 VLANs,
and test VSTP integration (rooting, etc.).
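
For context, the shape of what I'm testing on the qfx side looks roughly like this (all interface names, IDs, and VLANs below are illustrative placeholders, not my actual config; the redundancy-group / ICCP stanzas are omitted):

```
## qfx10008[0]: mc-ae trunk toward ex8216[1], with VSTP enabled for one VLAN
interfaces {
    ae10 {
        aggregated-ether-options {
            lacp {
                active;
                system-id 00:01:02:03:04:05;   # same on both mc-lag peers
                admin-key 10;
            }
            mc-ae {
                mc-ae-id 10;
                chassis-id 0;                  # 1 on the other peer
                mode active-active;
                status-control active;         # standby on the other peer
            }
        }
        unit 0 {
            family ethernet-switching {
                interface-mode trunk;
                vlan {
                    members [ v100 v101 ];
                }
            }
        }
    }
}
protocols {
    vstp {
        vlan 100 {
            interface ae10;                    # root stays on ex8216[1] during migration
        }
    }
}
```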

It does not appear that any VLAN configured for VSTP on either end of the above mc-ae is converging, or logging any received BPDUs.
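
In case it helps anyone reproduce this, I've been checking with the usual operational commands (ae10 is my illustrative mc-ae name; monitor traffic may need to be run on a member link rather than the ae, depending on platform):

```
show spanning-tree interface                    # per-VLAN port roles and states
show spanning-tree statistics interface ae10    # BPDU tx/rx counters
show spanning-tree bridge                       # what each peer thinks the root is
monitor traffic interface ae10 no-resolve matching "stp"
```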

Neither the Juniper Multichassis Link Aggregation User Guide nor any NCE paper really discusses MC-LAG peers also supporting VSTP.

Given how the ICCP/ICL-PL behave, am I going to need another aeX trunk between the two 10Ks to carry the VLANs configured for VSTP so they can converge?
This is an operational DC, so I can't just reconfigure our VCE converged solutions, even though they have A and B nexus 9Ks, etc.
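
If the answer is yes, what I had in mind is a plain (non-mc-ae) ae trunk between the two 10Ks, separate from the ICL, carrying only the VSTP VLANs and participating in VSTP; something like this on each peer (names/IDs illustrative, and I'd welcome correction if this is the wrong pattern):

```
## on each qfx10008: dedicated inter-peer trunk for the VSTP VLANs (not the ICL)
interfaces {
    ae20 {
        aggregated-ether-options {
            lacp {
                active;
            }
        }
        unit 0 {
            family ethernet-switching {
                interface-mode trunk;
                vlan {
                    members [ v100 v101 ];
                }
            }
        }
    }
}
protocols {
    vstp {
        vlan 100 {
            interface ae20;    # let VSTP block/forward this inter-peer link
        }
    }
}
```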

Options, thoughts? Anything is appreciated.
    Michael B