Re: [Nolug] Quick question on clustering...

From: Scott Harney <scotth_at_scottharney.com>
Date: Wed, 26 Mar 2003 14:09:51 -0600
Message-ID: <87r88topo0.fsf@zenarcade.local.lan>

Craig Jackson <craig.jackson@wild.net> writes:

> Actually, I need to get my terminology right and I'm not sure how to do
> that. Let's see, openmosaic is a clustering kernel patch used for
> coordinating linux servers for clustering. Is openmosaic what one would

openmosaic isn't familiar to me. LVS is one way to do it. Here's a
quick and dirty overview of network load balancing (a better term for
this): LVS runs on a coordinating server, and DNS points
www.yoursite.com at that box. When you make an HTTP request to
www.yoursite.com, the initial request hits the LVS box, which then
decides which actual server subsequent queries from your address get
directed to. The "decision" could be as simple as round robin, or it
could query the individual web servers for load information and send
requests to the most lightly loaded box.

That's the real short answer. I'm skipping a LOT here. This is just
the general concept of network load balancing.
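To make the "decision" part concrete, here's a toy sketch in Python of
round-robin assignment with sticky clients (this is just the idea, not
how LVS itself is implemented; the server names are made up):

```python
import itertools

class RoundRobinBalancer:
    """Hand each new client the next real server in a fixed rotation."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)
        self._assigned = {}  # client address -> chosen real server

    def pick(self, client_addr):
        # Subsequent requests from the same address stick to one box,
        # so session state on the web server keeps working.
        if client_addr not in self._assigned:
            self._assigned[client_addr] = next(self._cycle)
        return self._assigned[client_addr]

balancer = RoundRobinBalancer(["web1", "web2", "web3"])
print(balancer.pick("1.2.3.4"))  # web1
print(balancer.pick("5.6.7.8"))  # web2
print(balancer.pick("1.2.3.4"))  # web1 again (sticky)
```

A least-loaded scheduler would just replace `next(self._cycle)` with a
query against the real servers.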

Just to confuse you further: for real high availability, you can
cluster your LVS servers so that if one box fails, the other takes
over the public IP address of the LVS. You might need to take it a
step further still and host your "site" at diverse physical locations,
with multiple LVSs coordinating via routing information (BGP tables)
and such to direct connections. Akamai is one company that does this
sort of thing. You'll note in some URLs from Yahoo and MSNBC stuff
like "g.akamaikitech.ablasd93ch.blahblah" and such.

How important is your resource? I.e., how much is it worth?

There are a fair number of resources on load balancing Apache web
servers. It can get pretty complex pretty fast, though.

> use to cluster webservers delivering static content? In the case of
> dynamic content and database connection, the large backend database as
> Joey, Ron, and Dustin spoke of would come into play. That would require
> strictly ethernet.

A large backend with appropriate RAID and backup would come into
play. A mysql/postgres setup replicating to a slave box that you could
manually fail over to would probably work for most scenarios. If
you're running internal DNS and all your web servers reach the
database via an internal hostname, it would be fairly easy to fail
over to your replicated copy. A little magic with some sort of
service-checking scripted daemon could automate this. I imagine some
folks have done this sort of thing with nagios.
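The service check itself can be dead simple. A sketch in Python (the
hostnames are hypothetical, and a real check would speak the database
protocol rather than just opening a TCP connection):

```python
import socket

# Hypothetical internal hostnames for the master and its replica.
PRIMARY = ("db-master.internal", 3306)
REPLICA = ("db-replica.internal", 3306)

def is_alive(host, port, timeout=2.0):
    """Crude service check: can we open a TCP connection to the port?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_db():
    """Prefer the master; fall back to the replica if it's down."""
    if is_alive(*PRIMARY):
        return PRIMARY
    return REPLICA
```

A daemon running this in a loop could repoint the internal DNS record
(or update an alias) when `choose_db()` changes its answer, which is
roughly the nagios-event-handler approach.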

> If one uses a SANS, then fiber channel is used and in
> the above scenario, several databases are attached to the SANS device
> and possibly even the web servers themselves. Correct me if I'm wrong in
> my reading.

SANs do cost big bucks. You might be able to find relatively cheap
FC drives and adapters, but SAN switches are very pricey.

> Sound remarkably fault tolerant. And possibly expensive, esp for the
> SANS.

-- 
Scott Harney<scotth@scottharney.com>
"...and one script to rule them all."
gpg key fingerprint=7125 0BD3 8EC4 08D7 321D CEE9 F024 7DA6 0BC7 94E5

___________________
Nolug mailing list
nolug@nolug.org

Received on 03/26/03

This archive was generated by hypermail 2.2.0 : 12/19/08 EST