Re: [Xen-devel] Strange interdependace between domains
On Fri, 2014-02-14 at 12:02 +0000, Simon Martin wrote:
> Thanks everyone and especially Ian! It was the hyperthreading that was
> causing the problem.
>
Good to hear, and at the same time, sorry to hear that. :-) I mean, I'm glad you nailed it, but at the same time, I'm sorry that the solution is to 'waste' a core! :-(

I reiterate and restate here, without any problem doing so, that Xen should be doing at least a bit better in these circumstances, if we want to properly address use cases like yours. However, there are limits on how far we can go, and hardware design is certainly among them!

All this to say that it should be possible to get a bit more isolation by tweaking the proper Xen code paths appropriately. But if the amount of interference that comes from two hyperthreads sharing registers, pipeline stages, and whatever else it is that they share is enough to disturb your workload, then I'm afraid we'll never get much further than the 'don't use hyperthreading' solution! :-(

Anyway, with respect to the first part of this reasoning, would you mind (when you've got the time, of course) one more test? If not, I'd say: configure the system as I was suggesting in my first reply, i.e., using core #2 as well (or, in general, all the cores). Also, make sure you add this parameter to the Xen boot command line:

  sched_smt_power_savings=1

(some background here: http://lists.xen.org/archives/html/xen-devel/2009-03/msg01335.html)

And then run the benchmark with disk activity on.

> Here's my current configuration:
>
> # xl cpupool-list -c
> Name            CPU list
> Pool-0          0,1
> pv499           2,3
> # xl vcpu-list
> Name        ID  VCPU   CPU  State   Time(s)  CPU Affinity
> Domain-0     0     0     0  r--        16.6  0
> Domain-0     0     1     1  -b-         7.3  1
> win7x64      1     0     1  -b-        82.5  all
> win7x64      1     1     0  -b-        18.6  all
> pv499        2     0     3  r--       226.1  3
>
> I have pinned dom0 as I wasn't sure whether it belongs to Pool-0 (I
> assume it does, can you confirm please).
>
Actually, you are right.
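[For reference, the boot-line tweak suggested above would typically go in the GRUB configuration. A minimal sketch, assuming a Debian-style GRUB 2 layout; the file path, the `dom0_mem` value, and the regeneration command are illustrative, not taken from Simon's setup:]

```shell
# /etc/default/grub (Debian-style layout assumed; adjust for your distro).
# Append the flag to the options GRUB passes to the Xen hypervisor itself
# (not to the dom0 kernel line):
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M sched_smt_power_savings=1"

# Then regenerate the boot config and reboot, e.g.:
#   update-grub            (Debian/Ubuntu)
#   grub2-mkconfig -o /boot/grub2/grub.cfg   (Fedora/SUSE-style)
```

After rebooting, the active hypervisor command line can be checked with `xl info | grep xen_commandline` to confirm the parameter took effect.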
It looks like there is no command or command parameter that explicitly tells you to which pool a domain belongs [BTW, adding Juergen, who knows that for sure]. If that is the case, we really should add one.

BTW, if you boot the system and then create the (new) pool(s), all the existing domains, including Dom0, will stay in the "original" pool at pool-creation time, while the new pool(s) will be empty. To change that, you'd have to either migrate the existing domains into specific pools with `xl cpupool-migrate', or create them specifying the proper option and the name of the target pool in the config file (which is probably what you're doing for your DomUs).

I guess, as a workaround for confirming where a domain is, you can (try to) migrate it around with `xl cpupool-migrate', and see what happens.

> Dario, if you are going to look at the
>
Is something missing here... ?

Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

Attachment:
signature.asc

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel