If anyone would like to do a presentation on clustering, I have some
equipment (older ProLiants with Fibre Channel) that I could let them use. I
don't know how to do it myself, but I would like to see and learn. It may
take a little tweaking to get everything working (the FC in particular), but
it would be cool. Right now the machines are just sitting in my living room.
Just a thought; I would like to see how to configure a Linux cluster (hence
why I got the equipment) but have never gotten around to looking up how to
do it.
-----Original Message-----
From: Dustin Puryear [mailto:dpuryear@usa.net]
Sent: Wednesday, March 26, 2003 12:58 PM
To: nolug@joeykelly.net
Subject: Re: [Nolug] Quick question on clustering...
Joey is pretty on the mark here. I manage one web cluster now, and in the
past worked with a rather largish Linux Networx-based cluster. Typically,
you either create or purchase a front-end that will distribute the load to
your n web servers. (LVS is good for this.) Depending on the traffic you
may actually have multiple front-ends, or at least a hot fail-over. For the
database you will usually have a single, massive database server, or a
database server cluster of its own. So you have:
front-end <--switch--> web servers <--switch/fw--> database servers
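To make the front-end piece concrete, here is a minimal LVS sketch using
ipvsadm. This is configuration, not a full setup: all the addresses are made
up, it assumes the director has the ip_vs kernel module loaded and root
access, and a real deployment would add health checking and director
fail-over on top of it.

```shell
# Create a virtual HTTP service on the director's public VIP
# (192.168.0.100 is an assumed address), round-robin scheduling
ipvsadm -A -t 192.168.0.100:80 -s rr

# Add the real web servers behind it. -m is NAT forwarding;
# -g would give you direct routing instead.
ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.11:80 -m
ipvsadm -a -t 192.168.0.100:80 -r 192.168.0.12:80 -m
```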
This pretty much solves the database issue. If you are running a cluster of
database servers, you sync them however the vendor tells you to. :) Oracle
comes with support for this, as does SQL Server. You can kind of do it with
MySQL, but it only works really well with a master/slave configuration. You
can have two masters be slaves to one another, but you would only do that
to have a hot fail-over ready.
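For the MySQL master/slave case, the core of it is just a couple of my.cnf
fragments. This is a sketch, not a complete recipe: the server IDs and log
name are arbitrary, and the slave still has to be pointed at the master (and
given a consistent data snapshot) per the MySQL replication docs.

```ini
# --- master's my.cnf ---
[mysqld]
server-id = 1
log-bin   = mysql-bin   # binary log the slaves replay

# --- slave's my.cnf ---
[mysqld]
server-id = 2           # must be unique across the cluster
```

The slave then connects with a CHANGE MASTER TO statement naming the
master's host and replication credentials, and starts pulling the binary log.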
You also have to worry about how your web servers are going to access the
files they serve up as content. If the files generally don't change too
often, you can just rsync from a staging server to your web servers. You
can also use NFS. If given the option, go with rsync. NFS is great for
what it does, but it is a potential I/O bottleneck.
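The staging-to-web-server push is about as simple as it sounds. Here is a
minimal sketch using local temp directories as stand-ins; on a real cluster
the target would be something like web1:/var/www/html over ssh, run once per
web server.

```shell
# Stand-in directories for the staging area and a web server docroot
STAGE=$(mktemp -d)
DOCROOT=$(mktemp -d)
echo '<html><body>hello</body></html>' > "$STAGE/index.html"

# -a preserves permissions and times; --delete makes the docroot an
# exact mirror, so files removed from staging disappear from the
# servers too
rsync -a --delete "$STAGE/" "$DOCROOT/"
```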
Well, to be honest, avoid the network and dynamic content whenever you can
if you have a high-volume site. Go with local files and static content at
that point. (You can often convert your sometimes-updated PHP content to
static files by regenerating those files with some kind of content-generation
script once an hour or so.) Also remember that the web server will end up
caching your files if possible, so load up on RAM. I know that Netscape
iPlanet will mmap() your content files if possible.
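The regenerate-to-static trick can be sketched in a few lines. gen_page
below is a hypothetical stand-in for whatever actually produces the HTML (a
PHP CLI script, a database query, etc.); in practice you would run the
script from cron, e.g. hourly with `0 * * * * /usr/local/bin/regen.sh`.

```shell
# Stand-in docroot; a real script would use the actual web root
DOCROOT=$(mktemp -d)

# Hypothetical generator for the "dynamic" page
gen_page() {
    printf '<html><body>Generated at %s</body></html>\n' "$(date -u)"
}

# Write to a temp file, then rename, so the web server never
# serves a half-written page
gen_page > "$DOCROOT/index.html.tmp"
mv "$DOCROOT/index.html.tmp" "$DOCROOT/index.html"
```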
This really is fun stuff.
Tangent alert...
At 12:32 PM 3/26/2003 +0000, you wrote:
>Ron can probably answer this better, but I'll take a stab at it.
>
>Many times, the web server is physically separated from the database, so that
>one machine doesn't have to do the work of processing requests from the user
>(what link did the user click on, etc.), rendering pages, and hosting the
>database application. If the database work is shuffled off to a separate
>database server, this lightens the load on the webserver. Obviously you would
>want the webserver and database server links to be very fast, possibly using
>gigabit ethernet. You can then easily load-balance several web servers, all
>pulling data from the database.
>
>What about a cluster of database servers? Perhaps Ron can jump in here, but
>if most of your database requests are selects or reads, then you can probably
>replicate your data at intervals to several slave machines, while your writes
>to the database would be sent to the master database, and these changes would
>propagate to the slaves during scheduled replications.
>
>Ron was telling me the other day about SANs (storage area networks), where
>the database servers were networked together not by ethernet, but by SCSI or
>some other data connection to the storage media. This layout is kind of like
>a bicycle tire, where the storage media is at the hub, and the database
>servers are on the perimeter. They all share the same hard drive cluster
>(RAID) for their reads and writes, therefore eliminating the need for the
>staggered scheme detailed above. Replication in this instance would only be
>needed for backup purposes.
>
>Disclaimer: I haven't done any real database work, nor any high-load stuff,
>but am only echoing what I've picked up over the years here and there. Also I
>wrote a research paper on distributed databases for a client once which Scott
>might remember reading.
>
>--Joey
>
>
>Thou spake:
> >Craig Jackson <craig.jackson@wild.net> writes:
> >> But maybe the answer isn't so quick ;)
> >>
> >> How is it that web servers can be clustered? How are the databases kept
> >> in sync?
> >
> >Well that's two different issues isn't it? Or perhaps you need to
> >provide a more detailed question....
>
>--
>
>Joey Kelly
>< Minister of the Gospel | Computer Networking Consultant >
>http://joeykelly.net
>
>I'd rather crash a Ford than wreck a Chevy
>
>___________________
>Nolug mailing list
>nolug@nolug.org
---
Dustin Puryear <dustin@puryear-it.com>
Puryear Information Technology
Windows, UNIX, and IT Consulting
http://www.puryear-it.com
___________________
Nolug mailing list
nolug@nolug.org

Received on 03/26/03
This archive was generated by hypermail 2.2.0 : 12/19/08 EST