On Sun, Oct 23, 2011 at 11:35 AM, Jimmy Hess <mysidia@gmail.com> wrote:
> On Sun, Oct 23, 2011 at 9:34 AM, B. Estrade <estrabd@gmail.com> wrote:
> > On Sat, Oct 22, 2011 at 06:30:00AM -0500, Clint Billedeaux wrote:
> >> On Fri, Oct 21, 2011 at 7:54 PM, Ron Johnson <ron.l.johnson@cox.net> wrote:
>
> > Think about this; if you have a 4-core machine, you shouldn't run more
> > than 3 images if you expect to have "good performance"; furthermore,
>
> "Good performance" in a set of metrics you care about also depends on
> the application.
> If you measure performance in "I expect to boot a VM at any time, have
> it boot to a XP
> desktop in 30 seconds, double click on such and such icon, open MS
> Office within 10 seconds,
> and have cnn.com open up within 2 seconds of double clicking "; then
> you are more
> stringent than if you defined good performance "A VM can take 10
> minutes to boot,
> and I expect to be able to get to a web page within 5 minutes"
>
>
> If you are virtualizing to aggregate servers onto as little metal as
> possible, then 3:4 (that is, 3 virtual CPU threads to 4 physical cores)
> is really a very low consolidation ratio. 4:1, 8:1, and 16:1 are good.
> For most common server workloads, CPU is easy to overcommit, and memory
> is often the bottleneck.
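>
> As a back-of-the-envelope sketch (Python, with made-up numbers; plug in
> your own host and guests), the ratio is just total vCPUs over physical
> cores:
>
>     # rough consolidation ratio (hypothetical example figures)
>     physical_cores = 4      # cores in the workstation
>     vcpus_per_vm = 1        # vCPUs handed to each guest
>     vms = 8                 # guests you plan to run at once
>
>     total_vcpus = vms * vcpus_per_vm
>     print("%d:%d -> %.1f vCPUs per core"
>           % (total_vcpus, physical_cores, total_vcpus / physical_cores))
>     # 8 single-vCPU VMs on 4 cores -> 2.0, i.e. a 2:1 overcommit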
>
> For VMs running on Linux, I think you can achieve at least 2:1 or 4:1
> for many workloads without a performance issue -- the more you want to
> overcommit, the more planning you need. To exceed 2:1, you will want
> certain CPU features: hardware assist that offloads aspects of the
> virtual machine monitor to the hardware, specifically VT-x (or AMD-V),
> nested paging (EPT/RVI), VT-d for directed I/O and NIC functionality,
> and the NX/XD security feature.
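>
> A quick way to see whether the CPU even exposes those bits on a Linux
> host (rough sketch; "vmx" is Intel VT-x, "svm" is AMD-V, "ept"/"npt"
> are the nested-paging flags, and exact flag names vary a little by CPU
> and kernel):
>
>     # look for hardware-virt and NX flags in /proc/cpuinfo
>     flags = set()
>     with open("/proc/cpuinfo") as f:
>         for line in f:
>             if line.startswith("flags"):
>                 flags.update(line.split(":", 1)[1].split())
>
>     for wanted in ("vmx", "svm", "nx", "ept", "npt"):
>         print("%-4s %s" % (wanted,
>               "present" if wanted in flags else "missing"))
>     # note: a flag can be present in the silicon but still disabled in
>     # the BIOS; the VM software will complain at start if it is unusable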
>
> None of that does any good, however, unless your BIOS and motherboard
> chipset support the CPU feature, have it enabled, and your virtual
> machine software also supports it. Often CPU/motherboard manufacturers
> will "segment" their market and artificially cripple their CPUs by
> disabling the "performance accelerating feature" on desktop models.
>
> ECC memory is also something to look for when putting many VMs on a
> single workstation. The problem with non-ECC memory is that when it
> goes bad, you are more likely to have critical issues in the vein of
> data corruption: over time, different VMs and different parts of each
> VM's memory come to live on the "bad chip", so random, unexplainable
> blue screens and filesystem corruption are more likely than they would
> be in the "single host" situation, where likely only applications would
> be affected.
>
> Obviously, if you are running your VMs on top of Linux instead of a
> bare-metal hypervisor, the general-purpose OS is not designed to
> schedule the VMX processes efficiently, and the memory manager is not
> really tailored to make the right decisions about how to allocate
> memory for the workload. You are also layering two levels of resource
> management on top of each other, which creates overhead and causes
> poorer performance.
>
> > memory to map more than 1:1 to VM memory, you don't have unfettered
> > access to this memory on the VMs because you are actually contributing
> > to a great amount of memory contention on the host OS.
>
> If your host OS is lightweight and your VM memory usage is less than
> the available memory, what you have isn't contention with applications
> for the _amount of available memory_. You might have memory _access
> contention_, but again that's usually not an issue, depending on how
> much memory and how many VMs you are actually using.
>
> The OS may, though, see a large amount of memory and wish to use it for
> filesystem cache. What's clear is that memory overcommit in Linux is no
> good for VM workloads: if you do overcommit, swapping will not be
> intelligent, and it will totally compromise the performance of the
> system.
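>
> If you want to see what the host is currently set to do, the relevant
> knobs live under /proc/sys/vm. A read-only peek (sketch; the paths are
> standard Linux sysctls, defaults vary by distro):
>
>     # peek at the host's overcommit/swap settings
>     def read_sysctl(path):
>         with open(path) as f:
>             return f.read().strip()
>
>     print("vm.overcommit_memory =",
>           read_sysctl("/proc/sys/vm/overcommit_memory"))
>     print("vm.swappiness        =",
>           read_sysctl("/proc/sys/vm/swappiness"))
>     # 0 = heuristic overcommit (the usual default); a low swappiness
>     # makes the host less eager to page VM memory out for file cache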
>
> I would say that when using Linux to host VMs, make sure you have at
> least 20% more physical memory than you assign to the VMs (and if you
> run more than 2 or 3 VMs, ECC memory, of course, on a motherboard that
> supports it).
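>
> In numbers, that rule of thumb is just (rough sketch, invented sizes):
>
>     # sanity-check physical RAM against the total VM allocation
>     vm_ram_gb = [1.0, 1.0, 1.0, 1.0]   # RAM assigned to each guest (GB)
>     physical_gb = 8.0                  # RAM installed in the workstation
>     headroom = 1.20                    # the "at least 20% more" rule
>
>     needed = sum(vm_ram_gb) * headroom
>     verdict = "OK" if physical_gb >= needed else "tight"
>     print("VMs want %.1f GB, host should have >= %.1f GB, has %.1f GB -> %s"
>           % (sum(vm_ram_gb), needed, physical_gb, verdict))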
>
>
> Advanced memory overcommit techniques, for the most part, are only
> meaningful on bare metal. A possible exception is "Kernel samepage"
> merging, but really, the feature lacks maturity.
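>
> (On a KVM host you can at least see whether samepage merging is
> switched on. A sketch; the sysfs path only exists if the kernel was
> built with KSM, and only KSM-aware hypervisors such as KVM/qemu
> benefit from it:)
>
>     # report whether Kernel Samepage Merging is running
>     import os
>
>     ksm = "/sys/kernel/mm/ksm"
>     if not os.path.isdir(ksm):
>         print("KSM not available in this kernel")
>     else:
>         run = open(ksm + "/run").read().strip()            # 0=off, 1=on
>         shared = open(ksm + "/pages_sharing").read().strip()
>         print("KSM run=%s, pages_sharing=%s" % (run, shared))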
>
> > just taking advantage of the locality of data. It doesn't address
> > memory paging issues brought in by heavy memory use of multiple
> > processes; it also doesn't address the bottlenecks at the networking
> > or I/O level.
>
> If your server has some decent NICs, the primary networking concern
> with typical network workloads would be packets per second. Remember,
> in essence you need to interrupt 2 extra CPU cores every time a packet
> is received or sent just to copy the data, instead of 0 CPU cores on a
> physical host where the data is DMA'd, and that's the bare-metal
> hypervisor situation (without SR-IOV). If you are using NAT-mode
> networking or routing on your VM host, it's worse.
>
> For local storage I/O, you should be concerned with how many IOPS and
> how much bandwidth your total VM workload requires. What your storage
> system can deliver will also depend on how random the IO is and how it
> is spread across VMs, and on how the requests are split between reads
> and writes.
>
> Your workstation's controller cache and IO rate capacity are the major
> factors; having multiple workloads doesn't change the physical limits
> of the controller, so the combined demand must still fit within them.
> Sizing the IO subsystem of a workstation running 20 VMs is similar to
> sizing the IO subsystem of a database server.
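>
> The sizing arithmetic is the same either way (rough sketch, invented
> per-VM figures; measure the real machines with iostat/perfmon first):
>
>     # add up a per-VM IOPS estimate and compare it to the disks
>     vm_iops = {"xp-accounting": 40, "xp-frontdesk": 25, "xp-spare": 10}
>     write_fraction = 0.4     # share of the IO that is writes
>     raid_penalty = 2         # e.g. RAID1 turns 1 logical write into 2
>     disk_iops = 150          # what the controller/spindles can sustain
>
>     total = sum(vm_iops.values())
>     backend = (total * (1 - write_fraction)
>                + total * write_fraction * raid_penalty)
>     verdict = "fits" if backend <= disk_iops else "undersized"
>     print("front-end %d IOPS -> back-end ~%d, capacity %d -> %s"
>           % (total, backend, disk_iops, verdict))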
>
> > even use a host OS because it just gets in the way. You need something
> > like Xen or a hypervisor that serves as a very basic layer without
>
> Something like XenClient with a standalone install, if they still have
> that available.
> For pure server workloads, ESXi or XenServer :)
>
> --
> -JH
>
Wow, I wish I could work in an environment where all this was bombarding me
on a daily basis. For now, I'll settle for being the big fish in the small
pond who deals with the small stuff.

Essentially my focus is still on not losing the old XP machines while
getting rid of the 8-year-old towers. I just went with making VHD files,
uploading them to the new towers, and running them in VirtualBox because
it's an inexpensive and quick solution to a small problem. And I figured
that Windows 7 was just too much overhead for running XP inside VirtualBox,
but you guys are opening up a whole different can of worms that I'm going
to be researching for a few weeks just to make sure I'm not heading in
entirely the wrong direction. I'd hate to put my boss's operation in
jeopardy just because I "thought" I had a solution.

Thanks for the discussion,
Clint
___________________
Nolug mailing list
nolug@nolug.org
Received on 10/23/11