Re: [Nolug] Slow NFS on GigE

From: Charles Paul <charles.paul_at_gmail.com>
Date: Fri, 4 Apr 2008 15:06:34 -0500
Message-ID: <f720c0630804041306t527dbf19y7b89837f1e271ab2@mail.gmail.com>

Check to see if the MTU sizes are the same for both systems: Jumbo
(9000 bytes). For a GigE network you will want to have your frames as
large as possible... if these machines are only communicating with
each other, consider upping the MTU even further, to 64KB, if the
hardware supports it.

See your MTU values by running "netstat -i"
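
For example (a rough sketch, not from the original message; "eth0" and
the 9000-byte value are assumptions, so substitute your actual
interface and a size your NICs and switches support):

    # show the MTU for all interfaces
    netstat -i

    # raise the MTU to 9000 with iproute2
    ip link set dev eth0 mtu 9000

    # or with the older tool
    ifconfig eth0 mtu 9000

    # on CentOS/RHEL, make it persistent by adding MTU=9000 to
    # /etc/sysconfig/network-scripts/ifcfg-eth0

The MTU has to match on both hosts, and every switch in the path has
to pass jumbo frames, or you'll see drops and retransmits.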

Check this out:
http://cdfcaf.fnal.gov/doc/cdfnote_5962/node16.html

"""
Table 4 summarizes the NFS read performance when the aforementioned
configuration effects are tested in addition to Jumbo Frames. The main
conclusions are:

    * Jumbo frames alone increases the throughput by ~35%.
"""

On Fri, Apr 4, 2008 at 2:50 PM, Dustin Puryear <dustin@puryear-it.com> wrote:
> Really? What's getting me is that CIFS is just fine over the same card.
> The problem just seems to be with NFS. That said, there may be some weird
> interaction between the driver and NFS. Who knows.
>
> --
>
> Dustin Puryear
> President and Sr. Consultant
> Puryear Information Technology, LLC
> 225-706-8414 x112
> http://www.puryear-it.com
>
> Author, "Best Practices for Managing Linux and UNIX Servers"
> http://www.puryear-it.com/pubs/linux-unix-best-practices/
>
>
> Chris Jones wrote:
>
> > Check the Windows drivers. Find out the make of the gigabit chip, and get
> > the latest drivers from that company's web site (like Broadcom). The
> > gigabit drivers included with Windows are often unusable; I've seen this
> > same thing before.
> >
> > -----Original Message-----
> > From: "Dustin Puryear" <dustin@puryear-it.com>
> > To: general@brlug.net; nolug@nolug.org
> > Sent: 4/4/08 2:08 PM
> > Subject: [Nolug] Slow NFS on GigE
> >
> > I'm curious if anyone has had any performance issues with NFS over GigE?
> > We are bringing up a pretty standard VMware scenario: VMware servers are
> > connected to GigE with a bonded pair and our Dell NF500 NAS is running
> > RAID10. Fast and easy. Only...
> >
> > The NFS performance sucks. I need to get some firm numbers, but it looks
> > like we can't get NFS to perform better than if it were on a Fast Ethernet
> > network. That said, if we change over to mounting our VM filesystem using
> > CIFS we can scream at pretty much wire speed. (By the way, if you're using
> > CentOS 5.0 and mount.cifs, upgrade to 5.1, because the 5.0 kernel will
> > sometimes panic with a mounted CIFS share under high usage.)
> >
> > Here's our setup:
> >
> > 2 Dell 1850s running CentOS 5.1 with Intel GigE cards (2 cards each, 2
> > ports per card; 1 card = bonded pair = VMware Network, 1 card = 1 port =
> > Admin Network)
> >
> > 1 Dell NF500 running Windows Storage Server 2003 with a 4-disk RAID10 and
> > GigE
> >
> > Regardless of whether we use bonding/LAG (Dell PowerConnect 5000+) or just
> > simple GigE over one port, our NFS sucks. CIFS screams, though, and pretty
> > much saturates the connection.
> >
> > Right now I've tested Linux <--> NAS. When I have time I'll try Linux to
> > Linux.
> >
> >
>
___________________
Nolug mailing list
nolug@nolug.org
Received on 04/04/08
