
Re: [Xen-users] XCP Test workstation



On Wed, Oct 26, 2011 at 1:51 PM, Brett Westover <bwestover@xxxxxxxxxxx> wrote:
>I'm sure there will be various opinions on this, but after going down the "really expensive but rarely upgradable" path I just build commodity servers for XCP now. Each host has a 2U case, a hex-core CPU, 16 GB of RAM, a local drive that's currently not used for much, and two network interfaces. With each I buy a 240 GB SSD that goes in the SAN. So for $1000 I can drop one machine into the rack, plug in power and both network cables, slide the SSD into the SAN box, and I've just expanded capacity by 30 VMs. Each VM gets 512 MB of RAM and 7 GB of storage space. Any other storage can be pulled from larger disk-based SAN shares via iSCSI or NFS. This allows me to expand very quickly and at minimal cost.

>For testing purposes you could do without the SSD drives. I'm looking at building a 128-core cloud to teach cloud computing, using the same type of replaceable hosts. The cores will be divided up into smaller 16-core clouds, each given to a team of 4 students.

>I think it would all depend on what you plan on doing with your cloud.


>Grant McWilliams
>http://grantmcwilliams.com/


I agree with that philosophy, but how do you ensure the machines are compatible, both with the XCP software and with each other?

I started with the XenServer HCL that Citrix provides, and it led me to believe XenServer was fairly strict about what hardware it would work on.
(Or at least what Citrix was willing to support, which I know is different.)

It's just CentOS, so it works with most hardware that RHEL 5/CentOS 5 works with. Back in the 0.5 beta days I found machines that CentOS would run on but XCP wouldn't, but as of the 1.1 beta and newer the two seem to be about equal. So basically, grab a motherboard off the shelf and it will probably work fine. Having said that, pick your hardware wisely just as you would for anything else: just because XCP runs on it doesn't mean the design is stable enough for your project. I feel I have a bit more flexibility than I did with stock Xen because it's so easy to set up a system that can migrate VMs with XCP, which lets me be a bit more aggressive with my hardware.
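For example, once hosts are in the same pool with shared storage, moving a guest off a box you don't trust is a one-liner from dom0. A quick sketch (the VM and host names here are placeholders, not from my setup):

    # live-migrate a running VM to another host in the same pool
    xe vm-migrate vm=webvm host=xcp-host2 live=true

    # or disable a host and push everything off it before pulling it
    xe host-disable uuid=<host-uuid>
    xe host-evacuate uuid=<host-uuid>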

When you say "build commodity servers" are you referring to something "standard" from Dell, HP, or IBM, or do you mean "build" as in buy a case, a board/CPU/RAM, hard disks, etc.?

I started out with purpose-built rackmount systems that ran about $500 per core. Not super expensive big-iron stuff, but definitely server hardware. Now I buy off-the-shelf rackmount cases (with no drive bays), motherboards, CPUs, and RAM. The only things the motherboard needs are lots of cores, lots of RAM, and a network interface or two. That, and the design has to be sound enough not to be flaky. So far this is working.
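Software-wise there's almost nothing to do when a new box comes up, either. Roughly (a sketch; the master address and password are placeholders):

    # run on the freshly installed XCP host to add it to the pool
    xe pool-join master-address=10.0.0.1 master-username=root \
        master-password=secret

Shared SRs already defined on the pool get plugged on the new host when it joins, so it can start taking VMs right away.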

What you describe sounds like a dream compared to where we are currently. To expand capacity by that same amount we'd be looking at 10x the cost for hardware and software.

Brett Westover



I got tired of ending up with hardware that costs too much to upgrade (damn you, Intel), so now I buy hardware that's cheap enough to replace. The main concern, however, is that you need to think long and hard about your CPU choice, because all the hosts in a pool should have the same CPU. I prefer many less powerful cores over fewer more powerful ones, so I'm building a lot of AMD hex-core systems. The new cloud will probably be AMD 8-core boards.
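If you want to check whether a candidate board will pool cleanly with what you already have, the CPU details are visible from the CLI. A sketch:

    # list the hosts, then compare vendor/family/model/flags
    xe host-list params=uuid --minimal
    xe host-param-get uuid=<host-uuid> param-name=cpu_info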


Grant McWilliams
http://grantmcwilliams.com/

Some people, when confronted with a problem, think "I know, I'll use Windows."
Now they have two problems.
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


