
Re: [Xen-devel] [PATCH RFC v1 4/4] libxc for rt scheduler



Hi Dario,


2014-07-11 10:49 GMT-04:00 Dario Faggioli <dario.faggioli@xxxxxxxxxx>:
On Fri, 2014-07-11 at 00:49 -0400, Meng Xu wrote:
> Add xc_sched_rt_* functions to interact with Xen to set/get domain's

> --- /dev/null
> +++ b/tools/libxc/xc_rt.c

> +#include "xc_private.h"
> +
> +int
> +xc_sched_rt_domain_set(
> +    xc_interface *xch,
> +    uint32_t domid,
> +    struct xen_domctl_sched_rt_params *sdom)
>
So, both here...

> +{
> +    int rc;
> +    DECLARE_DOMCTL;
> +    DECLARE_HYPERCALL_BOUNCE(sdom,
> +        sizeof(*sdom),
> +        XC_HYPERCALL_BUFFER_BOUNCE_IN);
> +
... and here... I know libxc is not terribly consistent when it comes to
coding style. Still, I like stuff to be aligned to the opening brace of
the function/macro.

Have a look at xc_domain.c, e.g.,

...
int xc_vcpu_setaffinity(xc_interface *xch,
                        uint32_t domid,
                        int vcpu,
                        xc_cpumap_t cpumap_hard_inout,
                        xc_cpumap_t cpumap_soft_inout,
                        uint32_t flags)
{
    DECLARE_DOMCTL;
    DECLARE_HYPERCALL_BOUNCE(cpumap_hard_inout, 0,
                             XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
    DECLARE_HYPERCALL_BOUNCE(cpumap_soft_inout, 0,
                             XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
...

It's so nice of you to give me such an example! I will modify them in the next version.


> +    if ( xc_hypercall_bounce_pre(xch, sdom) )
> +        return -1;
> +
> +    domctl.cmd = XEN_DOMCTL_scheduler_op;
> +    domctl.domain = (domid_t) domid;
> +    domctl.u.scheduler_op.sched_id = XEN_SCHEDULER_RT;
> +    domctl.u.scheduler_op.cmd = XEN_DOMCTL_SCHEDOP_putinfo;
> +    set_xen_guest_handle(domctl.u.scheduler_op.u.rt.schedule, sdom);
> +
> +    rc = do_domctl(xch, &domctl);
> +
> +    xc_hypercall_bounce_post(xch, sdom);
> +
So, the bouncing logic itself seems fine. However, looking at what the
other schedulers do, there should be no particular need for bouncing
anything in the first place.

xen/include/public/domctl.h:

struct xen_domctl_scheduler_op {
    uint32_t sched_id;  /* XEN_SCHEDULER_* */
    uint32_t cmd;       /* XEN_DOMCTL_SCHEDOP_* */
    union {
        struct xen_domctl_sched_sedf {
            uint64_aligned_t period;
            uint64_aligned_t slice;
            uint64_aligned_t latency;
            uint32_t extratime;
            uint32_t weight;
        } sedf;
        struct xen_domctl_sched_credit {
            uint16_t weight;
            uint16_t cap;
        } credit;
        struct xen_domctl_sched_credit2 {
            uint16_t weight;
        } credit2;
    } u;
};
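
For comparison, the credit and sedf parameters above travel by value
inside the domctl itself, so those wrappers need no bounce buffer at
all. A minimal sketch of that pattern (modeled on
xc_sched_credit_domain_set in tools/libxc/xc_csched.c):

#include "xc_private.h"

/* Credit-style setter: the parameters are copied into the domctl,
 * so no guest handle and no bouncing is involved. */
int xc_sched_credit_domain_set(xc_interface *xch,
                               uint32_t domid,
                               struct xen_domctl_sched_credit *sdom)
{
    DECLARE_DOMCTL;

    domctl.cmd = XEN_DOMCTL_scheduler_op;
    domctl.domain = (domid_t)domid;
    domctl.u.scheduler_op.sched_id = XEN_SCHEDULER_CREDIT;
    domctl.u.scheduler_op.cmd = XEN_DOMCTL_SCHEDOP_putinfo;
    domctl.u.scheduler_op.u.credit = *sdom;

    return do_domctl(xch, &domctl);
}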

I see you have a TODO item related to this interface, so I guess I'm
leaving this alone.

My view here is, since per-VCPU scheduling parameters are important for
this scheduler, you either:
 - provide an interface for getting and setting the parameters of _one_
   _VCPU_, and then call it repeatedly, if for instance you need to act
   on a domain. In this case, no array and no bouncing should be
   necessary.
 - provide both the above, and another interface, that one with an array
   as big as the number of VCPUs of the domain, and use it to provide
   the hypervisor with the parameters of all the VCPUs, and act
   appropriately inside Xen.

Personally, I'd start with the former, and add the latter later, if we
deem it necessary.
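
A minimal sketch of the former, per-VCPU setter; note that
XEN_DOMCTL_SCHEDOP_putvcpuinfo and the parameter fields (vcpu_index,
period, budget) are illustrative assumptions here, not an existing
interface:

#include "xc_private.h"

/* Sketch: set the scheduling parameters of a single VCPU with one
 * hypercall. Field and constant names are assumed for illustration. */
int xc_sched_rt_vcpu_set(xc_interface *xch,
                         uint32_t domid,
                         struct xen_domctl_sched_rt_params *sdom)
{
    DECLARE_DOMCTL;

    domctl.cmd = XEN_DOMCTL_scheduler_op;
    domctl.domain = (domid_t)domid;
    domctl.u.scheduler_op.sched_id = XEN_SCHEDULER_RT;
    domctl.u.scheduler_op.cmd = XEN_DOMCTL_SCHEDOP_putvcpuinfo;

    /* Parameters are embedded in the domctl: no bounce buffer needed. */
    domctl.u.scheduler_op.u.rt.vcpu_index = sdom->vcpu_index;
    domctl.u.scheduler_op.u.rt.period     = sdom->period;
    domctl.u.scheduler_op.u.rt.budget     = sdom->budget;

    return do_domctl(xch, &domctl);
}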

Actually, we did the former one in a very early version of the rt scheduler. The only concern I have about the former approach is that it has to issue the hypercall many times to get all the VCPUs' information for a domain, while the latter issues the hypercall only once, with bouncing.

My concern is: will the former one have worse performance than the latter one?
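
To make the comparison concrete: with the per-VCPU interface, fetching a
whole domain's parameters is a loop of small hypercalls, roughly as
below (xc_sched_rt_vcpu_get and the vcpu_index field are hypothetical,
mirroring the setter sketched above; nr_vcpus would come from, e.g.,
xc_domain_getinfo()):

/* One domctl per VCPU: N hypercalls for an N-VCPU domain. */
static int get_all_rt_params(xc_interface *xch, uint32_t domid,
                             struct xen_domctl_sched_rt_params *out,
                             unsigned int nr_vcpus)
{
    unsigned int v;

    for ( v = 0; v < nr_vcpus; v++ )
    {
        out[v].vcpu_index = v;
        if ( xc_sched_rt_vcpu_get(xch, domid, &out[v]) )
            return -1;  /* any single failed hypercall aborts */
    }

    return 0;
}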

Thanks,

Meng



Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)




--


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel