I'm curious if anyone has had any performance issues with NFS over GigE?
We are bringing up a pretty standard VMware scenario: the VMware servers
are connected to GigE with a bonded pair, and our Dell NF500 NAS is running
RAID10. Fast and easy. Only...
The NFS performance sucks. I need to get some firm numbers, but it looks
like we can't get NFS to perform better than if it were on a Fast
Ethernet network. That said, if we switch to mounting our VM
filesystem using CIFS, we scream along at pretty much wire speed. (By the
way, if you're using CentOS 5.0 and mount.cifs, upgrade to 5.1, because the
5.0 kernel will sometimes panic under heavy load with a mounted CIFS share.)
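For what it's worth, this slow-NFS-but-fast-CIFS pattern is often down to NFS client mount defaults (small rsize/wsize, or UDP transport). A sketch of options worth trying, assuming a hypothetical export name and mount point (substitute your NAS hostname and share):

```shell
# Hypothetical server/export names -- replace "nf500:/vmstore" and
# "/mnt/vmstore" with your actual NAS export and mount point.
# Larger rsize/wsize plus TCP transport often helps NFS on GigE.
mount -t nfs -o rw,tcp,hard,intr,rsize=32768,wsize=32768 \
    nf500:/vmstore /mnt/vmstore

# Equivalent /etc/fstab entry:
# nf500:/vmstore  /mnt/vmstore  nfs  rw,tcp,hard,intr,rsize=32768,wsize=32768  0 0
```

The NFS server settings on the Windows Storage Server side (Services for UNIX / Server for NFS) are worth a look too; I can't say which knob is the culprit in this particular setup.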
Here's our setup:
2 Dell 1850s running CentOS 5.1 with Intel GigE cards (2 cards each, 2 ports
per card; 1 card = bonded pair = VMware network, 1 card = 1 port = admin
network)
1 Dell NF500 running Windows Storage Server 2003 with 4 disk RAID10 and GigE
Regardless of whether we use bonding/LAG (Dell PowerConnect 5000+) or
just a single GigE port, our NFS still sucks. CIFS screams, though,
and pretty much saturates the link.
Right now I've only tested Linux <--> NAS. When I have time I'll try
Linux <--> Linux.
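To put firm numbers on the NFS-vs-CIFS comparison, a rough sequential-throughput test with dd is a quick start. The mount paths below are hypothetical; adjust them to wherever the NFS and CIFS shares are mounted (the cache drop needs root):

```shell
# Write test: 1 GB sequential writes to each mount.
# oflag=direct bypasses the page cache so the numbers reflect the wire,
# not local caching.
dd if=/dev/zero of=/mnt/nfs/testfile  bs=1M count=1024 oflag=direct
dd if=/dev/zero of=/mnt/cifs/testfile bs=1M count=1024 oflag=direct

# Read test: drop the page cache first (requires root) so reads
# actually hit the network rather than local memory.
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/nfs/testfile  of=/dev/null bs=1M
dd if=/mnt/cifs/testfile of=/dev/null bs=1M
```

dd prints MB/s on completion, so running the same sequence against both mounts gives a directly comparable pair of numbers. Something like iozone or bonnie++ would give a fuller picture, but this is enough to confirm the Fast-Ethernet-class ceiling described above.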
--
Dustin Puryear
President and Sr. Consultant
Puryear Information Technology, LLC
225-706-8414 x112
http://www.puryear-it.com
Author, "Best Practices for Managing Linux and UNIX Servers"
http://www.puryear-it.com/pubs/linux-unix-best-practices/
___________________
Nolug mailing list
nolug@nolug.org

Received on 04/04/08
This archive was generated by hypermail 2.2.0 : 12/19/08 EST