
Re: [Xen-users] Re: Xen-users Digest, Vol 47, Issue 120



> good deal of overhead).  If I were doing any high I/O loads, I'd map the FC
> connections directly through from my SAN to the domU and not do file-based

Hi,

Can you explain this in a little more detail? I've tested running guests off FC 
storage attached directly to the server, and I've also tried a filer exporting 
NFS to the guests; both setups seem to be I/O intensive.

In fact I just posted about this, trying to figure out what the best balance of 
things is.
See the "Network Storage" thread if you're interested.

Mike


> disks.  As it is, most of my domUs are things like Intranet servers, news
> servers (low-traffic), a few Windows XP domUs, etc., that are not I/O
> intensive.  I'm about to move my e-mail system over to this set of systems,
> but I'll be passing that SAN volume through the dom0 to the domU.
>
> -Nick
>
>
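The file-based vs. pass-through distinction Nick draws above usually comes down
to a single line in the domU config. A minimal sketch of the two styles (the
device paths and image names here are purely illustrative, not taken from
Nick's setup):

    # file-backed disk: a disk image sitting on a filesystem in dom0
    # (e.g. on OCFS2 or NFS); guest I/O goes through that file in dom0
    disk = [ 'file:/var/lib/xen/images/guest1.img,xvda,w' ]

    # pass-through: hand the domU a block device that dom0 sees directly
    # (e.g. a SAN LUN or multipath device), skipping the file layer
    disk = [ 'phy:/dev/mapper/san-lun-guest1,xvda,w' ]

The second form is what "passing the SAN volume through the dom0 to the domU"
refers to: dom0 still owns the FC connection, but the guest's blocks are not
funneled through a file on a dom0 filesystem.
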
> -----Original Message-----
> From: xen-users-request@xxxxxxxxxxxxxxxxxxx
> Reply-To: xen-users@xxxxxxxxxxxxxxxxxxx
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: Xen-users Digest, Vol 47, Issue 120
> Date: Tue, 20 Jan 2009 08:50:10 -0700
>
> Send Xen-users mailing list submissions to
>         xen-users@xxxxxxxxxxxxxxxxxxx
> To subscribe or unsubscribe via the World Wide Web, visit
>         http://lists.xensource.com/mailman/listinfo/xen-users
> or, via email, send a message with subject or body 'help' to
>         xen-users-request@xxxxxxxxxxxxxxxxxxx
> You can reach the person managing the list at
>         xen-users-owner@xxxxxxxxxxxxxxxxxxx
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of Xen-users digest..."
>
> Today's Topics:
>    1. Re: Distributed xen or cluster? (Fajar A. Nugraha)
>    2. Re: Distributed xen or cluster? (Fajar A. Nugraha)
>    3. Xen 3.3.0 - QEMU COW disk image with sparse backing file -
>       VM fails to start (Martin Tröster)
>    4. sporadic problems relocating guests (J. D.)
>    5. Re: Distributed xen or cluster? (Nick Couchman)
> email message attachment    -------- Forwarded Message --------
> From: Fajar A. Nugraha <fajar@xxxxxxxxx>
> To: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
> Subject: Re: [Xen-users] Distributed xen or cluster?
> Date: Tue, 20 Jan 2009 22:09:24 +0700
>
> On Tue, Jan 20, 2009 at 8:21 PM, Nick Couchman <Nick.Couchman@xxxxxxxxx> wrote:
>> I use SLES10 SP2 for my dom0, which has a few tools that make this possible:
>> - EVMS + Heartbeat for shared block devices
>> - OCFS2 for a clustered filesystem
>> - Heartbeat for maintaining availability.
>>
>> I have a volume shared out from my SAN that's managed with EVMS on each of
>> my Xen servers. I created an OCFS2 filesystem on this volume and have it
>> mounted on all of them.
>
> That setup sounds like it has a lot of overhead. In particular, AFAIK a
> clustered file system (like OCFS2) has lower I/O throughput (depending on
> the workload) than a non-clustered FS. What kind of workload do you have
> on your domUs? Are they I/O-hungry (e.g. busy database servers)?
>
> Also, considering that (according to Wikipedia):
> - IBM stopped developing EVMS in 2006
> - Novell will be moving to LVM in future products
> IMHO it'd be better, performance- and support-wise, to use cLVM and put
> the domUs' filesystems on LVM-backed storage. Better yet, have your SAN
> give each domU its own LUN and let all dom0s see them all. domU config
> files should still be on a cluster FS (OCFS2 or GFS/GFS2), though.
>
> Regards,
> Fajar
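What Fajar suggests can be sketched roughly as follows; the volume group, LV,
and domU names below are invented for illustration, and the cLVM locking setup
itself (clvmd plus the cluster stack) is assumed to already be running on every
dom0:

    # on one dom0: carve a logical volume for the guest out of a clustered VG
    # that sits on the shared SAN LUN
    lvcreate -n vm_web01 -L 20G vg_xen_shared

    # in the domU config (kept on a small shared OCFS2/GFS volume so every
    # dom0 can start the guest), point the disk at the LV instead of a file:
    #   disk = [ 'phy:/dev/vg_xen_shared/vm_web01,xvda,w' ]

The "one LUN per domU" variant drops LVM entirely: the SAN exports a dedicated
LUN per guest, every dom0 sees all of the LUNs, and each domU config uses phy:
against its own LUN (or multipath device).
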
> email message attachment    -------- Forwarded Message --------
> From: Fajar A. Nugraha <fajar@xxxxxxxxx>
> To: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
> Subject: Re: [Xen-users] Distributed xen or cluster?
> Date: Tue, 20 Jan 2009 22:19:26 +0700
>
> On Tue, Jan 20, 2009 at 1:59 PM, lists@xxxxxxxxxxxx <lists@xxxxxxxxxxxx> wrote:
>> Thanks Mark, I'm just reading on it now. Sounds like it allows failover,
>> but I'm not sure that it's an actual cluster, as in redundancy?
>
> Exactly what kind of redundancy are you looking for? Is it (for example)
> having several domUs serving the same web content and having a load
> balancer in front of them so traffic is balanced among working domUs?
> email message attachment    -------- Forwarded Message --------
> From: Martin Tröster <TroyMcClure@xxxxxx>
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] Xen 3.3.0 - QEMU COW disk image with sparse backing file - VM fails to start
> Date: Tue, 20 Jan 2009 16:22:12 +0100
>
> Hi,
>
> I upgraded my Xen 3.2-based test system to Xen 3.3. With this installation,
> I am no longer able to start images with COW disk images based on qcow
> sparse files. Starting the domU via xm create reports success (with a
> "Starting Domain test" message printed to the console), but xm list
> directly afterwards shows no entry for the VM.
>
> The image structure is:
> - Base file A.img (sparse QCOW2 file)
> - COW file B.img (QCOW2 file with A.img as backing file), used as the disk
>   in the Xen config file (see VM config file below)
>
> I am running a CentOS 5.2 server with the pre-built packages at
> http://www.gitco.de/repro and can repro this behaviour on both 3.3.0 and
> 3.3.1RC1_pre (no 3.3.1 final version available, no time yet to do a build
> on my own).
>
> In detail, I see the following behaviour:
> - RAW image with disk using file:/ - works
> - QCOW sparse image using tap:qcow - works
> - QCOW image based on RAW image using tap:qcow - works
> - QCOW image based on QCOW sparse image using tap:qcow - fails
>
> Logs of failing case:
> =============================
> /var/log/xen/xend.log shows that the domain immediately terminates, but no
> ERROR indication:
>
> [2009-01-20 09:13:55 20596] DEBUG (XendDomainInfo:1443) XendDomainInfo.handleShutdownWatch
> [2009-01-20 09:13:55 20596] DEBUG (DevController:155) Waiting for devices vif.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:160) Waiting for 0.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:645) hotplugStatusCallback /local/domain/0/backend/vif/8/0/hotplug-status.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:645) hotplugStatusCallback /local/domain/0/backend/vif/8/0/hotplug-status.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:659) hotplugStatusCallback 1.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:160) Waiting for 1.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:645) hotplugStatusCallback /local/domain/0/backend/vif/8/1/hotplug-status.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:659) hotplugStatusCallback 1.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:155) Waiting for devices vscsi.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:155) Waiting for devices vbd.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:155) Waiting for devices irq.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:155) Waiting for devices vkbd.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:155) Waiting for devices vfb.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:155) Waiting for devices console.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:160) Waiting for 0.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:155) Waiting for devices pci.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:155) Waiting for devices ioports.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:155) Waiting for devices tap.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:160) Waiting for 51712.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:645) hotplugStatusCallback /local/domain/0/backend/tap/8/51712/hotplug-status.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:659) hotplugStatusCallback 1.
> [2009-01-20 09:13:55 20596] DEBUG (DevController:155) Waiting for devices vtpm.
> [2009-01-20 09:13:55 20596] INFO (XendDomain:1172) Domain test (8) unpaused.
> [2009-01-20 09:13:57 20596] INFO (XendDomainInfo:1634) Domain has shutdown: name=test id=8 reason=poweroff.
> [2009-01-20 09:13:57 20596] DEBUG (XendDomainInfo:2389) XendDomainInfo.destroy: domid=8
> [2009-01-20 09:13:57 20596] DEBUG (XendDomainInfo:2406) XendDomainInfo.destroyDomain(8)
> [2009-01-20 09:13:57 20596] DEBUG (XendDomainInfo:1939) Destroying device model
> [2009-01-20 09:13:57 20596] DEBUG (XendDomainInfo:1946) Releasing devices
> [2009-01-20 09:13:57 20596] WARNING (image:472) domain test: device model failure: no longer running; see /var/log/xen/qemu-dm-test.log
>
> /var/log/xen/qemu-dm-test.log shows nothing spectacular at all (at least for me):
>
> domid: 8
> qemu: the number of cpus is 1
> config qemu network with xen bridge for tap8.0 xenbr0
> config qemu network with xen bridge for tap8.1 eth0
> Using xvda for guest's hda
> Strip off blktap sub-type prefix to /path/to/test.img (drv 'qcow')
> Watching /local/domain/0/device-model/8/logdirty/next-active
> Watching /local/domain/0/device-model/8/command
> qemu_map_cache_init nr_buckets = 10000 size 3145728
> shared page at pfn 3fffe
> buffered io page at pfn 3fffc
> Time offset set 0
> Register xen platform.
> Done register platform.
>
> /var/log/messages also shows nothing remarkable. Interesting entries never
> seen before are:
>
> Jan 19 17:50:02 test kernel: tapdisk[21929] general protection rip:40b315 rsp:42900108 error:0
>
> but they occur all the time on Xen 3.3 and Xen 3.3.1 when using qcow images
> (and seem to be recovered from).
>
> VM config file:
> ------------------------------------------------------------------------------
> name = "test"
> device_model = '/usr/lib64/xen/bin/qemu-dm'
> builder = "hvm"
> kernel = "/usr/lib/xen/boot/hvmloader"
> # hardware
> memory = "1024"
> disk = [ 'tap:qcow:/path/to/B.img,ioemu:xvda,w' ]
> vcpus = 1
> # network
> vif = [ 'type=ioemu,mac=02:00:00:00:00:01', 'type=ioemu,bridge=eth0,mac=02:00:00:00:01:98' ]
> dhcp = "dhcp"
> # visualization
> sdl = 0
> vnc = 1
> vncviewer = 0
> ------------------------------------------------------------------------------
>
> Any help is appreciated. Thanks!
>
> Cheers,
> Martin
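For readers trying to reproduce the layout Martin describes, a backing chain
like his is typically built with qemu-img along these lines (the image names
match his description; the size is only an example):

    # sparse qcow2 base image
    qemu-img create -f qcow2 A.img 10G

    # copy-on-write overlay that uses A.img as its backing file
    qemu-img create -f qcow2 -b A.img B.img

    # B.img is then what the domU config references:
    #   disk = [ 'tap:qcow:/path/to/B.img,ioemu:xvda,w' ]

It is the last combination in his test matrix, a qcow overlay on top of a
sparse qcow base, that fails under tap:qcow on 3.3.0 and 3.3.1RC1_pre.
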
> email message attachment    -------- Forwarded Message --------
> From: J. D. <jdonline@xxxxxxxxx>
> To: Xen Users <xen-users@xxxxxxxxxxxxxxxxxxx>
> Subject: [Xen-users] sporadic problems relocating guests
> Date: Tue, 20 Jan 2009 10:50:40 -0500
>
> Hello all,
>
> I am experiencing some problems relocating guests. I could not relocate the
> guest squid to the physical node xen01, although other guests would migrate
> to xen01 without issue. I rebooted the node and now I can relocate my squid
> guest to xen01. Now, however, I find that I can no longer relocate the squid
> guest to xen00. The errors below are what I am seeing in the messages log
> on xen00.
>
> We are on Red Hat 5.2 using Cluster Suite and the stock Xen. Any ideas?
>
>
> Jan 20 10:03:07 xen00 clurgmgrd[11135]: <warning> #68: Failed to start vm:squid; return value: 1
> Jan 20 10:03:07 xen00 clurgmgrd[11135]: <notice> Stopping service vm:squid
> Jan 20 10:03:07 xen00 kernel: xenbr0: port 7(vif9.1) entering disabled state
> Jan 20 10:03:07 xen00 kernel: device vif9.1 left promiscuous mode
> Jan 20 10:03:07 xen00 kernel: xenbr0: port 7(vif9.1) entering disabled state
> Jan 20 10:03:13 xen00 clurgmgrd[11135]: <notice> Service vm:squid is recovering
> Jan 20 10:03:13 xen00 clurgmgrd[11135]: <warning> #71: Relocating failed service vm:squid
> Jan 20 10:03:15 xen00 clurgmgrd[11135]: <notice> Service vm:squid is now running on member 3
>
> Best regards,
>
> J. D.
>
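For reference, the relocation J. D. describes is normally driven through
rgmanager's clusvcadm; a sketch using the service and node names from the log
above (the commands themselves are standard Red Hat Cluster Suite tools, not
taken from his mail):

    # ask rgmanager to relocate the vm:squid service to a specific member
    clusvcadm -r vm:squid -m xen00

    # show current service placement and cluster member status
    clustat

When relocation fails on one particular node, the rgmanager messages and the
xend logs on that node (as quoted above) are usually the first place to look.
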
> email message attachment    -------- Forwarded Message --------
> From: Nick Couchman <Nick.Couchman@xxxxxxxxx>
> To: rbeglinger@xxxxxxxxx
> Cc: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-users] Distributed xen or cluster?
> Date: Tue, 20 Jan 2009 09:01:55 -0700
>
> My SAN is Active-Active, but you should still be able to accomplish this
> even with an Active-Passive SAN.  This shouldn't be an issue during normal
> operations at all - it'll just be a matter of whether things can fail over
> correctly in the event of a SAN controller failure.
>
> -Nick
>
>
> -----Original Message-----
> From: Rob Beglinger <rbeglinger@xxxxxxxxx>
> To: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
> Subject: Re: [Xen-users] Distributed xen or cluster?
> Date: Tue, 20 Jan 2009 08:26:24 -0600
>
> Nick,
>
> Is your SAN an Active-Active or an Active-Passive SAN?  I'm looking to set
> up something like what you're doing, but my SAN only supports Active-
> Passive.  We originally looked at Win2K8 with Hyper-V, but fortunately that
> requires a SAN that supports an Active-Active configuration.  I'm using
> SLES 10 SP2 for dom0, and will be running SLES 10 SP2 domUs as well.  I am
> running Xen 3.2.
>
> On Tue, Jan 20, 2009 at 7:21 AM, Nick Couchman <Nick.Couchman@xxxxxxxxx> wrote:
>> I use SLES10 SP2 for my dom0, which has a few tools that make this
>> possible:
>> - EVMS + Heartbeat for shared block devices
>> - OCFS2 for a clustered filesystem
>> - Heartbeat for maintaining availability.
>>
>> I have a volume shared out from my SAN that's managed with EVMS on each
>> of my Xen servers.  I created an OCFS2 filesystem on this volume and have
>> it mounted on all of them.  This way I do file-based disks for all of my
>> domUs and they are all visible to each of my hosts.  I can migrate the
>> domUs from host to host.  I'm in the process of getting Heartbeat set up
>> to manage my domUs - Heartbeat can be configured to migrate VMs or
>> restart them if one of the hosts fails.
>>
>> It isn't a "single-click" solution - it takes a little work to get
>> everything running, but it does work.
>>
>> -Nick
>
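In practice, the setup Nick describes boils down to disk images that live on
the shared OCFS2 mount plus xm's live migration between hosts that see the same
mount. A rough sketch, with the mount point, image path, and domU/host names
invented for illustration:

    # every host's copy of the domU config points at the same file on the
    # shared OCFS2 mount:
    #   disk = [ 'file:/srv/xen-ocfs2/images/web01.img,xvda,w' ]

    # from the host currently running the guest, live-migrate it to a peer
    # that has the same OCFS2 volume mounted
    xm migrate --live web01 xen02

Heartbeat then automates the same operation (restart or migrate the domU) when
it detects that one of the hosts has failed.
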
>>>>> "lists@xxxxxxxxxxxx" <lists@xxxxxxxxxxxx> 2009/01/19 23:37 >>>
>>>>>
>> Anyone aware of any clustering package for xen, in order to gain
>> redundancy, etc.
>
>> Mike
>
>
>> This e-mail may contain confidential and privileged material for the sole
>> use of the intended recipient. If this email is not intended for you, or
>> you are not responsible for the delivery of this message to the intended
>> recipient, please note that this message may contain SEAKR Engineering
>> (SEAKR) Privileged/Proprietary Information. In such a case, you are
>> strictly prohibited from downloading, photocopying, distributing or
>> otherwise using this message, its contents or attachments in any way. If
>> you have received this message in error, please notify us immediately by
>> replying to this e-mail and delete the message from your mailbox.
>> Information contained in this message that does not relate to the
>> business of SEAKR is neither endorsed by nor attributable to SEAKR.
>
>



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

