
Re: [Xen-users] Give dom0 2 pinned vcpus, but share one with domU



rsync with compression will use up more CPU resources. If you are doing a backup between disks on the same PC, I probably would not use compression, as the "network" speed on a bridged network between domU and dom0 should be in the 10-Gigabit range.

As an example, I do use compression for creating backup images of my domU. Inside dom0, after making a LVM snapshot, I use "pigz" like this:
dd if=/dev/$group/$guest-snap bs=1024k | pigz -c > "$mt"/$group-$guest.img.gz
to create a compressed image file of the VM.
pigz uses all available VCPUs for compression. Since I'm not limiting the number of vcpus on my dom0, and my domU isn't running during the backup, it works very well (~7min for a 70GB domU volume).
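
For anyone adapting this, the full flow is roughly the following (a
sketch only: volume group "vg0", guest "guest1", and the snapshot size
are made-up examples, and $mt is assumed to be your backup mount point):

# take a snapshot so the copy stays consistent while the guest's LV changes
lvcreate --snapshot --size 5G --name guest1-snap /dev/vg0/guest1
# stream the snapshot through pigz (parallel gzip) into a compressed image
dd if=/dev/vg0/guest1-snap bs=1024k | pigz -c > "$mt"/vg0-guest1.img.gz
# drop the snapshot once the image is written
lvremove -f /dev/vg0/guest1-snap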

By not pinning down or limiting the dom0, Xen is able to allocate whatever CPU resources it needs between dom0 and domU. If you run into a situation where your domU CPU load is maxed out while at the same time you have I/O-intensive and perhaps CPU-intensive dom0 loads, you could try increasing the scheduler priority (weight) for dom0. But it would probably take some experimenting to see what difference it makes, if any.
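
If you want to experiment with that, the knob is the per-domain weight
(a sketch; 512 is an arbitrary example, the credit scheduler's default
weight is 256):

# double dom0's share relative to the default, so it wins out under contention
xl sched-credit -d Domain-0 -w 512
# show the current weights/caps for all domains
xl sched-credit
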
On Tuesday, May 20, 2014 3:24 PM, jumperalex <alex@xxxxxxxxxxxxxx> wrote:



> How about NOT pinning / Why am I pinning?

In short, because of this:
http://wiki.xen.org/wiki/Xen_Project_Best_Practices#Dedicating_a_CPU_core.28s.29_only_for_dom0
I'm just doing what I'm told :O  But I'm obviously open to suggestion.
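
For reference, the recipe on that page boils down to something like this
(a sketch; the exact grub variable differs by distro, and the core
numbers assume an 8-core box like mine):

# Xen hypervisor command line, e.g. in /etc/default/grub:
GRUB_CMDLINE_XEN="dom0_max_vcpus=2 dom0_vcpus_pin"

# and in each domU config, keep the guest off dom0's cores:
cpus = "2-7"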

Now I can't claim my dom0 is doing HEAVY I/O, but it is hosting my unraid
array, so any VM (one at this point, running Plex Media Server) will be
pulling 1080p video streams from it to transcode (fulfilling the heavy
domU workload bit) and then sending them back out to the clients on the
network. Soon I also plan on running some HandBrake jobs, which will
probably have my server screaming for several days straight, and then
about twice a week after that. Those could be scheduled during times of
the day when user interaction is unlikely, but that will just prolong the
overall job of converting my whole library. At the same time it is
possible, though rare due to scheduling, that I could be hitting the
array with two backup streams coming from PCs running Acronis.

That is not quite the worst-case scenario, but it is the most likely
one. I could throw in a few other processes that also occur and are
pretty I/O-heavy, but those are really unlikely to overlap, or they will
happen when no one is around to notice. And the two main culprits, my
CPU-heavy rsync and Plex transcoding, literally couldn't have happened at
the same time, because the VM gets paused to run the rsync copy of the VM
image :)  As you'll see below, though, I've also solved the CPU-hogging
rsync.

All that said, I'm fully willing to admit I'm probably spending 95% of my
time chasing the last 5% of performance, but I like at least poking around
to make sure I haven't left something huge ripe for the taking.

> There is an option to adjust the credit scheduler  - see
> http://wiki.xen.org/wiki/Credit_Scheduler. More on Xen tuning can be found
> http://wiki.xenproject.org/wiki/Tuning_Xen_for_Performance. See also
> http://wiki.xen.org/wiki/Performance_of_Xen_VCPU_Scheduling.

Thanks. I will definitely take a look.  If done right that seems like an
even more elegant solution.


> Note though that depending on your workload pinning (especially dom0)
> might be actively harmful. Is there some reason you want to pin rather
> than letting dom0's vcpus float?

Well, I know my dom0 workload is generally pretty light from a CPU
perspective.  Even a single core from an FX-8320 would generally be
considered overkill for just handling the day-to-day of an unraid array.
What even brought this up was that an rsync backing up my domU.img into
the dom0 array was crushing my dom0 CPU and choking off the transfer.
BUT ... I found the main issue, which was the use of -z for compression
in rsync between local folders.  Once I turned that off, CPU usage
dropped and speed took off.  So I've solved my current problem via
efficiency vs. brute force (my preferred way), but it still has me
thinking it might not be a bad idea to let dom0 have the option of a
little bit more.
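
For comparison, what I'm running now is basically just this (paths made
up; the point is dropping -z for a local copy):

# -z only pays off over a slow network link; disk-to-disk it just burns CPU
rsync -a --progress /mnt/cache/domU.img /mnt/array/backups/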

I did try it out last night while watching xl vcpu-list, xl top, and
htop in both doms.  I ran rsync with -z and noticed an improvement, which
didn't surprise me.  Then I ran a transcode.  It is hard to confirm
performance improvements there when you're just going from 6 CPUs to 7,
so I was mostly just looking to see that seven distinct PCPUs were being
used.  At first I wasn't sure I was really seeing pcpu1 being shared like
you said, but after I looked at the screenshots later in the evening I
convinced myself that maybe it was working as hoped.  Then I woke up to
your post.  So I'll probably change it back again and observe some more
while I also read up on the Credit Scheduler.
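
Roughly what I'll be watching and toggling (the domain name "plex" is
just a stand-in for my Plex domU):

# show which pcpu each vcpu currently runs on, plus its affinity
xl vcpu-list
# pin dom0's two vcpus to pcpus 0-1
xl vcpu-pin Domain-0 all 0-1
# let the domU run on pcpus 1-7, i.e. share pcpu 1 with dom0
xl vcpu-pin plex all 1-7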

Thank you for indulging me.  Cheers.



--
View this message in context: http://xen.1045712.n5.nabble.com/Give-dom0-2-pinned-vcpus-but-share-one-with-domU-tp5722792p5722800.html

Sent from the Xen - User mailing list archive at Nabble.com.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

