Re: [Nolug] Slow NFS on GigE

From: Dustin Puryear <dustin_at_puryear-it.com>
Date: Fri, 04 Apr 2008 15:31:12 -0500
Message-ID: <47F69010.5090409@puryear-it.com>

Yeah, tried that already. Again, CIFS between the same two servers
saturates the GigE link, while NFS is just sad. I've already tweaked
the network parameters a good bit.
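
For what it's worth, the sort of tuning I mean is along these lines
(just a sketch; the hostname, paths, and numbers are examples, not
necessarily what's on our boxes):

  # NFS mount options on the CentOS clients: TCP, large read/write sizes
  mount -t nfs -o tcp,hard,intr,rsize=32768,wsize=32768 nas:/vmstore /vmfs

  # Kernel socket buffer limits bumped for GigE
  sysctl -w net.core.rmem_max=262144
  sysctl -w net.core.wmem_max=262144
  sysctl -w net.ipv4.tcp_rmem="4096 87380 262144"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 262144"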

Very odd stuff.

--
Dustin Puryear
President and Sr. Consultant
Puryear Information Technology, LLC
225-706-8414 x112
http://www.puryear-it.com
Author, "Best Practices for Managing Linux and UNIX Servers"
   http://www.puryear-it.com/pubs/linux-unix-best-practices/
Charles Paul wrote:
> Check to see if the MTU sizes are the same for both systems: jumbo
> (9000 bytes). For a GigE network you will want your frames as large
> as possible... if these machines are only communicating with each
> other, consider upping the MTU as high as the NICs and switch will
> allow.
> 
> See your MTU values by running "netstat -i"
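> 
> A quick sketch of checking and bumping it, assuming Linux and that
> eth0 is the right interface (the switch ports have to support jumbo
> frames as well):
> 
>   # current MTU per interface
>   netstat -i
>   # raise eth0 to jumbo frames (example interface name)
>   ifconfig eth0 mtu 9000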
> 
> 
> Check this out:
> http://cdfcaf.fnal.gov/doc/cdfnote_5962/node16.html
> 
> """
> Table 4 summarizes the NFS read performance when the aforementioned
> configuration effects are tested in addition to Jumbo Frames. The main
> conclusions are:
> 
>     * Jumbo frames alone increases the throughput by ~35%.
> """
> 
> 
> 
> On Fri, Apr 4, 2008 at 2:50 PM, Dustin Puryear <dustin@puryear-it.com> wrote:
>> Really? What's getting me is that CIFS is just fine over the same card.
>> The problem seems to be specific to NFS. That said, there may be some
>> weird interaction between the driver and NFS. Who knows.
>>
>>  --
>>
>>  Dustin Puryear
>>  President and Sr. Consultant
>>  Puryear Information Technology, LLC
>>  225-706-8414 x112
>>  http://www.puryear-it.com
>>
>>  Author, "Best Practices for Managing Linux and UNIX Servers"
>>   http://www.puryear-it.com/pubs/linux-unix-best-practices/
>>
>>
>>  Chris Jones wrote:
>>
>>> Check the Windows drivers. Find out the make of the gigabit chip, and
>>> get the latest drivers from that company's web site (like Broadcom).
>>> The gigabit drivers included with Windows are often unusable; I've
>>> seen this same thing before.
>>> -----Original Message-----
>>> From: "Dustin Puryear" <dustin@puryear-it.com>
>>> To: general@brlug.net; nolug@nolug.org
>>> Sent: 4/4/08 2:08 PM
>>> Subject: [Nolug] Slow NFS on GigE
>>>
>>> I'm curious if anyone has had any performance issues with NFS over
>>> GigE? We are bringing up a pretty standard VMware scenario: VMware
>>> servers are connected to GigE with a bonded pair, and our Dell NF500
>>> NAS is running RAID10. Fast and easy. Only...
>>> The NFS performance sucks. I need to get some firm numbers, but it
>>> looks like we can't get NFS to perform any better than if it were on
>>> a Fast Ethernet network. That said, if we switch to mounting our VM
>>> filesystem using CIFS, we scream along at pretty much wire speed. (By
>>> the way, if you're using CentOS 5.0 and mount.cifs, upgrade to 5.1,
>>> because the 5.0 kernel will sometimes panic under heavy use of a
>>> mounted CIFS share.)
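>>>
>>> For comparison, the two mounts look roughly like this (the NAS name
>>> and paths are made up for illustration):
>>>
>>>   # NFS mount that crawls
>>>   mount -t nfs -o tcp nas:/vmstore /vmfs
>>>   # CIFS mount of the same share that runs at wire speed
>>>   mount.cifs //nas/vmstore /vmfs -o user=vmware
>>>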
>>> Here's our setup:
>>>
>>> 2 Dell 1850s running CentOS 5.1 with Intel GigE cards (2 cards each,
>>> 2 ports per card; 1 card = bonded pair = VMware network, 1 card = 1
>>> port = admin network)
>>> 1 Dell NF500 running Windows Storage Server 2003 with a 4-disk RAID10
>>> and GigE
>>> Regardless of whether we use bonding/LAG (Dell PowerConnect 5000+) or
>>> just plain GigE over one port, our NFS still sucks. CIFS screams,
>>> though, and pretty much saturates the connection.
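>>>
>>> In case it matters, the bonding setup is roughly the stock CentOS 5
>>> recipe (interface names and the mode here are illustrative, from
>>> memory):
>>>
>>>   # /etc/modprobe.conf
>>>   alias bond0 bonding
>>>   options bond0 mode=802.3ad miimon=100
>>>
>>>   # /etc/sysconfig/network-scripts/ifcfg-eth0 (and ifcfg-eth1)
>>>   MASTER=bond0
>>>   SLAVE=yes
>>>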
>>> Right now I've tested Linux <--> NAS. When I have time I'll try Linux
>>> to Linux.
>>>
___________________
Nolug mailing list
nolug@nolug.org
Received on 04/04/08
