Re: [Nolug] Quick question on clustering...

From: Dustin Puryear <dpuryear_at_usa.net>
Date: Wed, 26 Mar 2003 14:03:33 -0600
Message-Id: <5.1.0.14.0.20030326135449.05c39a20@pop.netaddress.com>

Typically, the term "cluster" is used when you are at least ensuring
high-availability of a service between two or more servers. At least in the
context we are using it. Also, there is no strict anything in this
business, as we all know. As I suggested, a big site may use what appears to
be dynamic content when in fact they are serving up static files. They will
just generate every possible path through their web site and dump it to
disk. This can be much faster than having huge hits against a database.
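That "bake everything to disk" step can be sketched roughly as below. Everything here (the path list, the renderer) is a hypothetical stand-in for whatever a real site actually uses:

```python
import os

# Hypothetical: every path the generator knows how to produce.
PATHS = ["index.html", "about.html", "products/widget.html"]

def render_page(path):
    # Stand-in for the real renderer, which would hit templates and a database.
    return "<html><body>generated: %s</body></html>" % path

def bake_site(output_dir):
    """Render every known path once and dump the results to disk as static files."""
    for path in PATHS:
        target = os.path.join(output_dir, path)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        with open(target, "w") as f:
            f.write(render_page(path))
    return len(PATHS)
```

A cron job would then rerun the bake whenever the underlying data changes, and the web servers never touch the database at request time.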

Joey also had a good idea: a read-mostly site can keep satellite
database servers close to the web servers for quick access. Writes
are pushed only to the master. This is another way to increase speed.
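The read/write split can be sketched as a tiny statement router; the hostnames and the route() interface are hypothetical:

```python
import itertools

# Hypothetical hostnames for the master and the nearby satellite replicas.
MASTER = "db-master.example.com"
SATELLITES = ["db-sat1.example.com", "db-sat2.example.com"]
_reads = itertools.cycle(SATELLITES)   # round-robin over the read replicas

def route(sql):
    """Send reads to a satellite and everything else to the master."""
    verb = sql.lstrip().split(None, 1)[0].upper()
    if verb in ("SELECT", "SHOW"):
        return next(_reads)
    return MASTER                       # INSERT/UPDATE/DELETE etc. go to the master
```

The catch is replication lag: a read issued right after a write may not see that write on a satellite, so some applications pin "read your own writes" queries to the master.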

A SAN may or may not be used.

At 01:31 PM 3/26/2003 -0600, you wrote:

>Actually, I need to get my terminology right and I'm not sure how to do
>that. Let's see, openMosix is a clustering kernel patch used for
>coordinating Linux servers for clustering. Is openMosix what one would
>use to cluster webservers delivering static content? In the case of
>dynamic content and database connections, the large backend database as
>Joey, Ron, and Dustin spoke of would come into play. That would be
>strictly Ethernet. If one uses a SAN, then Fibre Channel is used, and in
>the above scenario, several databases are attached to the SAN device
>and possibly even the web servers themselves. Correct me if I'm wrong in
>my reading.
>
>Sounds remarkably fault-tolerant. And possibly expensive, especially the
>SAN.
>
>On Wed, 2003-03-26 at 12:57, Scott Harney wrote:
> > Joey Kelly <joey@joeykelly.net> writes:
> >
> > > Many times, the web server is physically separated from the database, so
> > > that one machine doesn't have to do the work of processing requests from
> > > the user (what link did the user click on, etc.), rendering pages, and
> > > hosting the database application. If the database work is shuffled off to
> > > a separate
> >
> > Right. And this is generally a good idea to do in a production
> > environment. Of course this is really network load-balancing rather
> > than "clustering" per se. But based on the context, I figured that was
> > what Craig meant by the question.
> >
> > > What about a cluster of database servers? Perhaps Ron can jump in here,
> > > but if most of your database requests are selects or reads, then you can
> > > probably replicate your data at intervals to several slave machines, while
> > > your writes to the database would be sent to the master database, and
> > > these changes would propagate to the slaves during scheduled replications.
> >
> > All the major databases can do this -- MySQL and PostgreSQL included.
> > I've implemented replication on mysql. If your application is written
> > correctly, you should be able to fail over gracefully. DNS is often
> > used in this scenario to provide a "virtual hostname". There's more
> > than one way to do it though...
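The client-side half of the failover Scott mentions can be sketched as below. The connect function is injected (in practice it would be a real database driver's connect) so the retry logic stands on its own:

```python
def connect_with_failover(hosts, connect):
    """Try each host in order; return (host, connection) for the first that answers.

    hosts would typically start with the "virtual" primary name and fall
    back to replica hostnames. The connect callable is hypothetical here.
    """
    last_error = None
    for host in hosts:
        try:
            return host, connect(host)
        except OSError as err:
            last_error = err        # note the failure, move on to the next host
    raise last_error if last_error else OSError("no hosts configured")
```

With DNS providing the virtual hostname, the application often needs no changes at all; this sketch is for the case where the fallback list lives in the application instead.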
> >
> > > Ron was telling me the other day about SANs (storage area networks),
> > > where the database servers were networked together not by Ethernet, but
> > > by SCSI or some other data connection to the storage media. This layout
> > > is kind of like a bicycle tire, where the storage media is at the hub,
> > > and the database servers are on the perimeter. They all share the same
> > > hard drive cluster (RAID) for their reads and writes, therefore
> > > eliminating the need for the staggered scheme detailed above. Replication
> > > in this instance would only be needed for backup purposes.
> >
> > Yeah. I'm doing a SAN here. SANs are generally not SCSI but Fibre
> > Channel. A SAN involves attaching your FC adapters to a SAN
> > switch. (I've not seen a SCSI SAN switch, but then I'd never looked to
> > see if such a thing did/could exist) The switch is cabled to the
> > drives as well. It's roughly analogous to a LAN, hence the name. I'm
> > not a DB admin, but I don't think you'd have multiple database
> > instances writing to the same database though. The locking could get
> > pretty complex.
> >
> > With clustering software you can do this with direct attached storage.
> > Only one box can have hold of the drives at a time, though -- you
> > can't read and write from both nodes simultaneously. The software, in
> > conjunction with a private network
> > and some sort of detection mechanism determines if a cluster node has
> > failed and switches the mountpoint (and services if this is all
> > implemented properly) to the other box.
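That detection-and-takeover loop can be sketched as follows. The probe and takeover hooks are hypothetical stand-ins; a real cluster manager (Heartbeat, for instance) does much more, including fencing the failed node so both boxes can't grab the drives at once:

```python
def monitor(peer_alive, take_over, max_misses=3):
    """One monitoring pass: fail over after max_misses consecutive missed probes.

    peer_alive probes the peer over the private network; take_over would
    mount the shared drives and start the services locally. Both are
    injected here so the logic is testable.
    """
    misses = 0
    while misses < max_misses:
        if peer_alive():
            return "peer-alive"     # heard from the peer; nothing to do
        misses += 1
    take_over()                     # peer presumed dead: claim drives and services
    return "took-over"
```

Requiring several consecutive misses before acting is what keeps a brief network hiccup from triggering a spurious (and possibly destructive) takeover.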
>--
>Craig Jackson
>Wildnet Group L.L.C.
>103 North Park, Suite 110
>Covington, Louisiana 70433
>985 875 9453
>

---
Dustin Puryear <dustin@puryear-it.com>
Puryear Information Technology
Windows, UNIX, and IT Consulting
http://www.puryear-it.com
___________________
Nolug mailing list
nolug@nolug.org
Received on 03/26/03

This archive was generated by hypermail 2.2.0 : 12/19/08 EST