[Nolug] Experiences with DM-MPIO on Linux hosts attached to Fiber Channel SAN

From: Scott Harney <scotth_at_scottharney.com>
Date: Wed, 23 Jan 2008 14:21:56 -0600
Message-ID: <4797A1E4.9040606@scottharney.com>

I'm curious if anyone has any experiences using Linux hosts on Fiber
Channel SAN with multipathed HBAs. Specifically I'd like to hear about:
1) Do you use in-kernel DM-MPIO (http://christophe.varoqui.free.fr/) or
a vendor-provided solution (e.g. Veritas DMP)? What kinds of issues or
problems have you had with either?
2) What HBAs do you use (Qlogic, Emulex)? Any driver issues or issues
with vendor provided tools?
3) What kind of backend storage are you connecting to? (EMC Clariion,
EMC Symmetrix, Hitachi, and/or NetApp are of particular interest.)
4) As a wildcard, any experience with DM-MPIO over iSCSI rather than FC?

I'm in an environment with a fairly sizeable multiple vendor SAN with
primarily Solaris and Wintel hosts. On both Wintel and Solaris, Veritas
DMP (and VxVM) are used extensively, so there is some advantage in
having a common toolkit. On Solaris, Veritas is mature and commonplace,
and there have been stability issues in the past with Solaris's
in-kernel MPxIO. We primarily use Sun-branded QLogic HBAs with the
Leadville driver stack (which is what ships with Solaris) and Veritas
layered on top. On the Wintel side, it's Emulex HBAs with Windows
drivers and Veritas layered on top.

I'm leaning towards recommending sticking with Linux native DM-MPIO for
several reasons, but am curious if anyone has had any interesting
experiences to relay.
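
For reference, the native route would mean a config along these lines.
This is only a minimal sketch of /etc/multipath.conf to show the shape
of the thing; the WWID, alias, and blacklist pattern are placeholders,
not from any real environment, and the defaults are just common
starting points rather than tuned values:

```
defaults {
    # Map devices to /dev/mapper/mpathN instead of raw WWID names
    user_friendly_names yes
}

blacklist {
    # Placeholder: exclude the local boot disk from multipathing
    devnode "^sda$"
}

multipaths {
    multipath {
        # Placeholder WWID and alias for one SAN LUN
        wwid  3600a0b800012345600000000deadbeef
        alias oradata01
    }
}
```

After loading the config, `multipath -ll` lists the discovered maps and
the state of each path, which is the quickest way to confirm both HBAs
are actually seeing the LUNs.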
___________________
Nolug mailing list
nolug@nolug.org
Received on 01/23/08

This archive was generated by hypermail 2.2.0 : 12/19/08 EST