Hi All,
Our setup is as follows:
Roughly 20 hosts (a mix of Windows and ESX), each with dual Broadcom 10Gb connections to the 4064f stack, and four SAN arrays in a single group (2 x 6100 and 2 x 6110) connected to the same stack. The arrays present LUNs to both Windows and ESX, and also to some Windows boxes hosted on ESX. We also have a test environment connected to an 8132f stack, which is linked to the main prod stack by a single cable (no LAG).
We have noticed that large write operations perform abysmally, peaking at around 125MB/s. We have done quite a bit of troubleshooting to try to fix the issue, including but not limited to the following:
- Set the network interface settings to enable TCP chimney and TCP offload
- Disabled LRO on the ESX hosts
- Set global auto-tuning to disabled, then back to normal (as disabling it worsened performance)
- Ensured that jumbo frames are enabled on both the host network interfaces and the switch interfaces
- Updated the firmware on the 4064f's
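For anyone wanting to check the same things, these are roughly the commands involved in the steps above (the SAN group IP is a placeholder; substitute your own):

```shell
# Windows hosts: TCP chimney / offload state
netsh int tcp set global chimney=enabled
netsh int tcp show global

# Windows hosts: receive window auto-tuning (we disabled it, then reverted)
netsh int tcp set global autotuninglevel=disabled
netsh int tcp set global autotuninglevel=normal

# ESX hosts: disable LRO for the default TCP/IP stack
esxcli system settings advanced set -o /Net/TcpipDefLROEnabled -i 0

# Verify jumbo frames end to end: 8972-byte payload + 28-byte header = 9000,
# with the don't-fragment bit set so a mis-configured hop fails loudly
ping -f -l 8972 <SAN_group_IP>      # from Windows
vmkping -d -s 8972 <SAN_group_IP>   # from ESX
```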
When running the same process against local disk we get around 1650MB/s. I am aware we won't match that to the SAN, but we should be seeing around 1000MB/s, or at least maxing out the IOPS of the SAN, and we do not.
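One thing that stands out to us about the numbers (this is just unit conversion, nothing environment-specific): 125MB/s is almost exactly the line rate of a single 1Gb link, which makes us suspect the traffic is squeezing through something negotiated at 1Gb rather than 10Gb.

```python
# Convert our observed peak throughput to link speed and compare
# against common Ethernet line rates.
observed_mb_per_s = 125            # our peak write throughput in megabytes/s

observed_mbps = observed_mb_per_s * 8
print(observed_mbps)               # 1000 -- exactly 1GbE line rate

# For comparison, a single 10Gb path should top out near:
ten_gbe_mb_per_s = 10_000 / 8
print(ten_gbe_mb_per_s)            # 1250.0 MB/s theoretical ceiling
```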
Any help would be greatly appreciated
Thanks
Chris