
Re: [Xen-devel] [RFC] [Design] Better XL support of RTDS scheduler for Xen 4.6



Hi all,

2015-02-24 11:35 GMT-05:00 Dario Faggioli <dario.faggioli@xxxxxxxxxx>:
> On Tue, 2015-02-24 at 14:16 +0000, Ian Campbell wrote:
>> On Tue, 2015-02-24 at 11:30 +0000, Dario Faggioli wrote:
>
>> > Semantically speaking, they just should be killed. OTOH, what I was
>> > suggesting was this: if one calls libxl_domain_sched_params_set(), which
>> > takes a libxl_domain_sched_params, the budget and period there will be
>> > interpreted as "for the whole domain", and applied to all the domain's
>> > vcpus. It's in this sense that I see it an either/or:
>> >  - at domain creation time, if the user does not say anything we'll use
>> >    the domain-wide API for setting a default budget and period for all
>> >    the vcpus.
>> >  - if the user does say something in the config file (once this will be
>> >    possible), or upon request, via the proper xl command, to modify the
>> >    parameters of vcpu X, we'll use the vcpu-wide API.
>>
>> The simplest (and IMHO least surprising) thing would be to have the
>> per-vcpu ones override the per-domain ones.
>>
> I agree.

Sorry, I'm not sure I fully understand what "have the per-vcpu ones
override the per-domain ones" means.
My understanding is:
At domain creation time, the default behaviour is to use the per-domain
implementation (in libxl, libxc, and Xen) to set the default budget and
period for all VCPUs.
If the user specifies per-vcpu parameters, or wants to change a vcpu's
parameters with an xl command after the domain is created, the
vcpu-wide API is used instead.

Am I correct?
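
In code form, my understanding of the build-time ordering is roughly
the sketch below; libxl_domain_sched_params_set() is the existing libxl
call, while the per-vcpu call mentioned in the comment is only a
placeholder for the new API we are discussing:

#include <libxl.h>

/*
 * Sketch of "per-vcpu overrides per-domain" at domain build time.
 * Only libxl_domain_sched_params_set() exists today; the per-vcpu
 * setter in the comment below is a placeholder name.
 */
static int rtds_apply_config(libxl_ctx *ctx, uint32_t domid,
                             const libxl_domain_sched_params *dom_params)
{
    /* 1. Apply the domain-wide (default) period/budget to all vcpus. */
    int rc = libxl_domain_sched_params_set(ctx, domid, dom_params);
    if (rc) return rc;

    /*
     * 2. Then, for any vcpu the user configured explicitly, override
     *    with the new per-vcpu call, e.g. (name purely illustrative):
     *
     *        rc = libxl_vcpu_sched_params_set(ctx, domid, vcpu_params);
     */
    return rc;
}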

>
>> So at build time you would first call the per-domain set function with
>> whatever parameters were provided, followed by setting any per-vcpu
>> options which are configured.
>>
> Yep.
>
>> At run time via xl commands I guess they would be separate commands for
>> domains vs vcpus,
>>
> Yes, either there will be separate commands, or the same command will
> have a way of specifying whether the parameters are to be considered
> domain-wide, or just apply to one (or more) vcpus.

So let me try to summarize: :-)

We are going to implement both the per-domain and the per-vcpu get/set
methods.

In the hypervisor and libxc: we will keep using
XEN_DOMCTL_SCHEDOP_getinfo/putinfo for the per-domain operations (i.e.,
the get/set methods), and introduce a new hypercall interface, say
XEN_VCPUCTL_SCHEDOP_getinfo/setinfo, for the per-vcpu operations.
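
Just to make this concrete, here is a purely hypothetical sketch of
what the per-vcpu payload could look like on the hypercall side (none
of these structure names exist in Xen today; the real layout is exactly
what we need to agree on):

/* Hypothetical only -- names and layout are placeholders. */
struct xen_domctl_sched_rtds_vcpu {
    uint32_t vcpuid;   /* could be implicit, see the note at the end */
    uint32_t period;   /* microseconds, as in xen_domctl_sched_rtds */
    uint32_t budget;   /* microseconds */
};
typedef struct xen_domctl_sched_rtds_vcpu xen_domctl_sched_rtds_vcpu_t;
DEFINE_XEN_GUEST_HANDLE(xen_domctl_sched_rtds_vcpu_t);

struct xen_domctl_scheduler_vcpu_op {
    uint32_t cmd;        /* the new get/set sub-ops */
    uint32_t sched_id;   /* XEN_SCHEDULER_RTDS */
    uint32_t nr_vcpus;   /* number of entries in the array below */
    XEN_GUEST_HANDLE_64(xen_domctl_sched_rtds_vcpu_t) vcpus;
};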

In xl: we will support both per-domain and per-vcpu operations.
The semantics of the xl per-domain operation will be: "xl sched-rtds -d 4
-p 10000 -b 5000 --all", which sets all vcpus of domain 4 to have a
period of 10ms and a budget of 5ms.
The semantics of the xl per-vcpu operation will be what Dario described,
as follows:

>     # xl sched-rtds -d 4 -p 10000 -b 5000
>      |
>      ---> sets all the vcpus of domain 4 to 5000/10000
>
>     # xl sched-rtds -d 3 -v 2 -p 5000 -b 2000
>      |
>      ---> sets vcpu 2 of domain 3 to 2000/5000
>
>     # xl sched-rtds -d 2 -v 0 -p 10000 -b 1000 -v 1 -p 5000 -b 2500
>      |
>      ---> set vcpu 0 of domain 2 to 1000/10000 and vcpu 1 of
>           domain 2 to 2500/5000
>

In libxl: we will support both per-domain and per-vcpu operations. The
per-domain operations are already supported in current Xen, so we will
add two new library functions for the per-vcpu get/set operations.
(For now, the per-vcpu get/set operations in libxl will also use the
array-based implementation.)
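
Something like the following is what I have in mind for the two new
libxl functions (everything below is a placeholder sketch; in reality
the types would be generated from libxl_types.idl):

/* Placeholder sketch -- the real types would come from libxl_types.idl. */
typedef struct {
    uint32_t vcpuid;   /* or implicit from the array index, see below */
    int period;        /* microseconds */
    int budget;        /* microseconds */
} libxl_rtds_vcpu_params;

typedef struct {
    int num_vcpus;                   /* entries in the array below */
    libxl_rtds_vcpu_params *vcpus;   /* one entry per vcpu to get/set */
} libxl_vcpu_sched_params;

int libxl_vcpu_sched_params_get(libxl_ctx *ctx, uint32_t domid,
                                libxl_vcpu_sched_params *params);
int libxl_vcpu_sched_params_set(libxl_ctx *ctx, uint32_t domid,
                                const libxl_vcpu_sched_params *params);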

As for the data structures, we will use a union to hold the RTDS
per-vcpu parameters, so that when other schedulers want to support
per-vcpu operations, they can add their scheduler-specific data
structure to the union.
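
So the per-vcpu element could end up looking something like this (again
just a sketch, all names are placeholders):

/* Placeholder sketch of the union idea. */
struct xen_domctl_schedparam_vcpu {
    uint32_t vcpuid;
    union {
        struct {
            uint32_t period;   /* microseconds */
            uint32_t budget;   /* microseconds */
        } rtds;
        /*
         * Other schedulers can add their per-vcpu parameters here,
         * e.g. a .credit2 member, without changing the interface.
         */
    } u;
};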

Does the above summary make sense? :-)

BTW, in the array that holds all VCPUs' parameters, we don't need a
vcpu index field, because it is implicitly encoded as the index of the
array element.
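
I.e., if the array always covers vcpu 0 .. num_vcpus-1, applying it is
just a loop like the one below (apply_rtds_params() is a placeholder
for whatever setter we end up with, and the type is the one from the
libxl sketch above):

/* Sketch: the array index *is* the vcpu id. */
static void apply_all(uint32_t domid, const libxl_vcpu_sched_params *params)
{
    for (int i = 0; i < params->num_vcpus; i++)
        /* apply_rtds_params() is a placeholder for the eventual setter. */
        apply_rtds_params(domid, /* vcpuid = */ i,
                          params->vcpus[i].period,
                          params->vcpus[i].budget);
}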

Thank you very much for your advice!

Best,

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

