
Dell m6220 aggregate to Cisco 6506


I am transitioning from my ol' Dell 1955 blade chassis to my "new" Dell m1000e blade chassis at home and I've run into an issue. I'm posting it in Networking since it relates more closely to that than the other available categories.

The setup: a Dell m1000e with six m6220 standalone switches (not stacked) connecting to a Cisco 6506. Each m6220 spans four gigabit blades in the 6506:

- m6220-a1 port 17 -> 6506 g1/0/1
- m6220-a1 port 18 -> 6506 g2/0/1
- m6220-a1 port 19 -> 6506 g3/0/1
- m6220-a1 port 20 -> 6506 g4/0/1
- m6220-a2 port 17 -> 6506 g1/0/48
- m6220-a2 port 18 -> 6506 g2/0/48
- m6220-a2 port 19 -> 6506 g3/0/48
- m6220-a2 port 20 -> 6506 g4/0/48

The pattern repeats as you'd expect for the remaining m6220s.

The LAG groups are: po1 (a1, ports 17-20) to po1 (g1/0/1-g4/0/1); po6 (a2, ports 17-20) to po6 (g1/0/2-g4/0/2).
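
For reference, the channel config on both ends looks roughly like this (a sketch from memory using po1/a1 as the example; interface names follow my cabling above, and "mode auto" is the m6220's LACP mode if I'm reading the PowerConnect CLI guide right):

    ! Cisco 6506
    interface range g1/0/1, g2/0/1, g3/0/1, g4/0/1
     channel-group 1 mode active
    !
    interface Port-channel1
     switchport
     switchport trunk encapsulation dot1q
     switchport mode trunk

    ! Dell m6220-a1
    interface range ethernet 1/g17-1/g20
    channel-group 1 mode auto
    exit
    interface port-channel 1
    switchport mode trunk
    exit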

Everything comes up fine. "show etherchannel summary" on the Cisco shows the LACP bundle up and running, and the equivalent command on the Dell side agrees.
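
In case it helps, these are the commands I've been using to check state (the Dell ones per the PowerConnect CLI guide, if memory serves):

    ! Cisco 6506
    show etherchannel summary
    show interfaces trunk

    ! Dell m6220
    show interfaces port-channel 1
    show interfaces switchport port-channel 1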

I made all the LAG interfaces trunks and initially assumed the Dell behaved like Cisco, where no defined allowed-VLAN list means all VLANs are allowed. I've since read that the default Dell behavior is to allow only the VLANs explicitly specified, so I've added the appropriate VLANs on the Dell side.
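
Roughly what I added (VLAN 10 here is a stand-in for my real management VLAN):

    ! Cisco 6506
    interface Port-channel1
     switchport trunk allowed vlan 1,10

    ! Dell m6220
    interface port-channel 1
    switchport trunk allowed vlan add 10
    exit

One thing I'm not sure about is the untagged/native side: if the management traffic is untagged, I assume the native VLAN on the Cisco trunk has to line up with whatever the m6220 sends untagged.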

Pings between the m6220s work fine. Pings between the 6506 and the blade servers do not.

The blade servers are running ESXi 5.5 U2, and I've added both fabric A interfaces to the management vSwitch. I first tried making the m6220's g1/0/1 an access port, but that didn't work. I then tried making it a trunk port and setting a VLAN, but that didn't work either.
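
If it matters, this is how I've been setting the management port group VLAN on the hosts (VLAN 10 again a placeholder; "Management Network" is the default port group name):

    # tag the management port group for VLAN 10; use 0 for no tagging,
    # i.e. when the upstream switch port is an access port
    esxcli network vswitch standard portgroup set -p "Management Network" -v 10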

What information is needed to assist with this?

Thanks.

