
Re: [Xen-devel] [RFC] [Design] Better XL support of RTDS scheduler for Xen 4.6



Hi Dario,

In summary, I agree with you and will apply the comments discussed
in this thread to work out an RFC patch. Below are just some
replies to your questions. :-)

2015-02-25 3:44 GMT-05:00 Dario Faggioli <dario.faggioli@xxxxxxxxxx>:
> On Tue, 2015-02-24 at 21:42 -0500, Meng Xu wrote:
>> Hi all,
>>
> Hello,
>
>> 2015-02-24 11:35 GMT-05:00 Dario Faggioli <dario.faggioli@xxxxxxxxxx>:
>> > On Tue, 2015-02-24 at 14:16 +0000, Ian Campbell wrote:
>> >> On Tue, 2015-02-24 at 11:30 +0000, Dario Faggioli wrote:
>> >
>> >> > Semantically speaking, they just should be killed. OTOH, what I was
>> >> > suggesting was this: if one calls libxl_domain_sched_params_set(), which
>> >> > takes a libxl_domain_sched_params, the budget and period there will be
>> >> > interpreted as "for the whole domain", and applied to all the domain's
>> >> > vcpus. It's in this sense that I see it an either/or:
>> >> >  - at domain creation time, if the user does not say anything we'll use
>> >> >    the domain-wide API for setting a default budget and period for all
>> >> >    the vcpus.
>> >> >  - if the user does say something in the config file (once this will be
>> >> >    possible), or upon request, via the proper xl command, to modify the
>> >> >    parameters of vcpu X, we'll use the vcpu-wide API.
>> >>
>> >> The simplest (and IMHO least surprising) thing would be to have the
>> >> per-vcpu ones override the per-domain ones.
>> >>
>> > I agree.
>>
>> Sorry, I'm not so sure I understand the term "to have the per-vcpu
>> ones override the per-domain ones."
>>
> It means that if one calls libxl_domain_sched_params_set(), with a
> struct libxl_domain_sched_params as argument --which contains one single
> budget value and one single period value-- we will apply those budget
> and period to _all_ the vcpus of the domain.
>
>> My understanding is:
>> At domain creation time, the default option is to use the per-domain
>> implementation (in libxl, libxc, and xen) to set the default budget
>> and period for all VCPUs.
>>
> This is an implementation detail, while we're discussing the look and
> the semantics of the interface. However, yes, it is very likely that
> we'll be calling libxl_domain_sched_params_set(), with some default
> budget and period values, at early stage during domain creation, to set
> default values, in case the user does not specify anything.
>
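
Right, that matches my understanding. Just to be sure we mean the same
thing, at creation time I picture xl/libxl doing roughly the following
(only a sketch: it assumes <libxl.h>, an initialised libxl_ctx *ctx and
the new domain's domid, and 10000/4000 are just placeholder defaults,
not values I'm proposing):

    /* Sketch: set default RTDS parameters for all vcpus of a domain via
     * the existing domain-wide libxl API. */
    libxl_domain_sched_params sparams;
    int rc;

    libxl_domain_sched_params_init(&sparams);
    sparams.sched  = LIBXL_SCHEDULER_RTDS;
    sparams.period = 10000;   /* microseconds, placeholder default */
    sparams.budget = 4000;    /* microseconds, placeholder default */
    rc = libxl_domain_sched_params_set(ctx, domid, &sparams);
    libxl_domain_sched_params_dispose(&sparams);
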
>> If the user specifies per-vcpu parameters, or wants to change the
>> parameters with an xl command after the domain is created, they can use
>> the vcpu-wide API to change them.
>>
>> Am I correct?
>>
> Well, they can use both, can't they? If they use the domain-wide API,
> that will set the parameters for all the vcpus, as said above. If they
> use the per-vcpu API, that will set the parameters for the vcpus
> specified in the arguments.
>
> Does this make sense?
>
>> So let me try to summarize: :-)
>>
>> We are going to implement both per-domain and per-vcpu get/set method.
>>
> The per-domain one is already there; the per-vcpu one, yes, we have to
> implement it.
>
>> In hypervisor and libxc: we will use
>> XEN_DOMCTL_SCHEDOP_getinfo/setinfo as the hypercalls for per-domain
>> operations (i.e., the get/set methods); we will introduce new hypercalls,
>> say XEN_VCPUCTL_SCHEDOP_getinfo/setinfo, for per-vcpu operations.
>>
> All the above is a lot more focused on the libxl API than on the Xen
> one, and they do not necessarily have to match. That said, yes, my opinion
> would be to do something similar to the above at the hypercall level
> too, i.e., to leave XEN_DOMCTL_SCHEDOP_{get,put}info alone, and add new
> hypercalls for per-vcpu parameters manipulation.
>
> These new hypercalls must be designed in such a way that, although
> they'll be used by RTDS only for now, it will be possible for other
> schedulers to use them in the future. This is less critical than in libxl,
> as this is not a stable interface, but if it's simple enough (as I think
> it is), I see few reasons not to do so.
>
> About the name, I'd go for something like
> XEN_DOMCTL_SCHEDOP_{get,put}vcpuinfo, i.e., let's retain the
> XEN_DOMCTL_SCHEDOP_ prefix, and state the specific operation afterwards,
> in the lowercase part of the name.
>
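
That naming works for me. For the payload of the new hypercall, I
currently picture something along these lines (purely illustrative,
nothing like this exists in Xen yet; the real interface would pass the
array through a guest handle rather than a raw pointer):

    #include <stdint.h>

    /* Illustrative sketch of a possible per-vcpu DOMCTL payload. */
    struct xen_domctl_sched_rtds_vcpu {
        uint32_t vcpuid;    /* which vcpu this entry refers to */
        uint32_t pad;
        uint64_t period;    /* in microseconds */
        uint64_t budget;    /* in microseconds */
    };

    struct xen_domctl_scheduler_vcpu_op {
        uint32_t sched_id;  /* e.g. XEN_SCHEDULER_RTDS */
        uint32_t nr_vcpus;  /* number of entries in the vcpus array */
        struct xen_domctl_sched_rtds_vcpu *vcpus;  /* guest handle in reality */
    };
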
>> In xl: we will support both per-domain and per-vcpu operation.
>> The semantics of the xl per-domain operation will be: xl sched-rtds -d 4
>> -p 10000 -b 5000 --all, which sets all vcpus of domain 4 to have period
>> 10ms and budget 5ms.
>> The semantics of the xl per-vcpu operation will be what Dario describes as
>> follows:
>>
>> >     # xl sched-rtds -d 4 -p 10000 -b 5000
>> >      |
>> >      ---> sets all the vcpus of domain 4 to 5000/10000
>> >
>> >     # xl sched-rtds -d 3 -v 2 -p 5000 -b 2000
>> >      |
>> >      ---> sets vcpu 2 of domain 3 to 2000/5000
>> >
>> >     # xl sched-rtds -d 2 -v 0 -p 10000 -b 1000 -v 3 -p 5000 -b 2500
>> >      |
>> >      ---> sets vcpu 0 of domain 2 to 1000/10000 and vcpu 3 of
>> >           domain 2 to 2500/5000
>> >
>>
> Yes. The third variant can come as a future, incremental addition; it
> does not have to be there in the first submission.
>
>> In libxl: we will support both per-domain and per-vcpu operations. The
>> per-domain operations are already supported in the current Xen. So we
>> will add two new libxl functions for the per-vcpu get/set operations.
>> (Right now, the per-vcpu get/set operations in libxl will also use the
>> array-based implementation.)
>>
> Yes. The only thing I'm not sure about is: what do you mean by 'right now'?

Ah, by 'right now' I just mean "in the current design". ;-)
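
For the record, the two new libxl calls I have in mind would look
roughly like this (names and signatures are placeholders to be nailed
down in the RFC; libxl_vcpu_sched_params would be the array-based type
discussed below):

    /* Placeholder prototypes for the new per-vcpu libxl operations. */
    int libxl_vcpu_sched_params_get(libxl_ctx *ctx, uint32_t domid,
                                    libxl_vcpu_sched_params *params);
    int libxl_vcpu_sched_params_set(libxl_ctx *ctx, uint32_t domid,
                                    const libxl_vcpu_sched_params *params);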

>
>> As to the data structure, we will use a union to hold the RTDS
>> per-vcpu parameters, so that when other schedulers want to support
>> per-vcpu operations, they can add their own scheduler-specific data
>> to the union.
>>
> Exactly.
>
>> BTW, in the array that holds all VCPUs' parameters, we don't need the
>> vcpu index because it's implicitly encoded as the index of the array
>> element.
>>
> Well, actually, thinking more about this, taking Wei's and Ian's
> comments into account, I think the index could be useful.
>
> In fact, the best option is probably to pass around arrays as big as the
> number of vcpus we want to modify the parameters of. That means we need
> a 'number_of_elements' field in the struct, next to the array itself
> (this is easily done in libxl_types.idl, and everywhere else), and a
> vcpu index, within each array element, to figure out what vcpu that
> element is about.
>
> So, in the above example:
>
>     # xl sched-rtds -d 4 -p 10000 -b 5000
>       |
>       ---> this will use the domain-wide API, so no array involved
>
>     # xl sched-rtds -d 3 -v 2 -p 5000 -b 2000
>       |
>       ---> this will use the per-vcpu API. The array will only have one
>            element, relative to vcpu 2:
>             num_vcpu_params = 1
>             vcpu_params[0].index = 2
>             vcpu_params[0].period = 5000
>             vcpu_params[0].budget = 2000
>
>     # xl sched-rtds -d 2 -v 0 -p 10000 -b 1000 -v 3 -p 5000 -b 2500
>       |
>       ---> this will use the per-vcpu API. The array will only have two
>            elements, relative to vcpu 0 and vcpu 3:
>             num_vcpu_params = 2
>             vcpu_params[0].index = 0
>             vcpu_params[0].period = 10000
>             vcpu_params[0].budget = 1000
>             vcpu_params[1].index = 3
>             vcpu_params[1].period = 5000
>             vcpu_params[1].budget = 2500
>
> This strategy still uses batching, but at the same time limits the
> amount of data passed around to the minimum.
>
> Would this be fine?

Sure! This will be perfectly fine! I implemented something like this in
an earlier version I submitted.
I will incorporate these comments and send out an RFC patch this week
or next week so that we can make it concrete. :-)
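
Just to double-check that I read the layout correctly, the type I will
propose in the RFC would look roughly like the sketch below (all names
are placeholders; the real definition will live in libxl_types.idl,
this is only the generated-C shape I have in mind):

    #include <stdint.h>

    /* Sketch of the array-based, per-vcpu parameter type. */
    typedef struct {
        uint32_t index;            /* which vcpu this entry is about */
        union {
            struct {
                uint32_t period;   /* in microseconds */
                uint32_t budget;   /* in microseconds */
            } rtds;
            /* other schedulers can add their members here later */
        } u;
    } libxl_vcpu_sched_params_entry;

    typedef struct {
        int num_vcpu_params;       /* entries actually used */
        libxl_vcpu_sched_params_entry *vcpu_params;
    } libxl_vcpu_sched_params;

    /* e.g., for "xl sched-rtds -d 3 -v 2 -p 5000 -b 2000":
     *   num_vcpu_params = 1;
     *   vcpu_params[0].index = 2;
     *   vcpu_params[0].u.rtds.period = 5000;
     *   vcpu_params[0].u.rtds.budget = 2000;
     */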

Thank you very much!

best,

Meng



-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
