
Re: [Xen-devel] Xen Platform QoS design discussion

> On 29/05/2014 08:31, Xu, Dongxiao wrote:
> >> -----Original Message-----
> > Okay. If I understand correctly, you prefer to implement a pure MSR access
> hypercall for one CPU, and put all other CQM things in libxc/libxl layer.
> >
> > In this case, if libvirt/XenAPI is trying to query a domain's cache 
> > utilization in
> the system (say 2 sockets), then it will trigger _two_ such MSR access 
> hypercalls
> for CPUs in the 2 different sockets.
> > If you are okay with this idea, I am going to implement it.
> >
> > Thanks,
> > Dongxiao
> While I can see the use and attraction of a generic MSR access
> hypercall, using this method for getting QoS data is going to have
> substantially higher overhead than even the original domctl suggestion.
> I do not believe it will be an effective means of getting large
> quantities of data from ring0 MSRs into dom0 userspace.  This is not to
> say that having a generic MSR interface is a bad thing, but I don't
> think it should be used for this purpose.

There are two directions for implementing this feature:

Generic MSR access hypercall: It removes most of the policy from the 
hypervisor and puts it in Dom0 userspace, which keeps the hypervisor 
implementation very simple, at a slightly higher cost (more hypercalls).

Sysctl hypercall: The hypervisor needs to consolidate all the CQM data, 
which burdens the implementation with more memory allocation, a data 
sharing mechanism (2-level: a page holding page addresses, plus the pages 
themselves), and more policy in the hypervisor; in exchange, its cost is 
slightly lower.

In my opinion, a domctl is a compromise between the two approaches: 
moderate policy in the hypervisor, a moderate amount of memory allocation, 
and moderate cost.

I would prefer either the domctl or the generic MSR access approach, which 
keeps the hypervisor implementation simple enough, at only a slightly 
higher cost.


> ~Andrew

Xen-devel mailing list
