
Re: [Xen-devel] [RFC PATCH V3 06/22] Define extended event channel registration interface



On 27/02/13 14:33, Wei Liu wrote:
> This interface allows the user to query supported event channel types. If we
> need to disable a specific ABI in the future, we just need to remove the dead
> code and clear the corresponding feature bit.
> 
> Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> ---
>  xen/include/public/event_channel.h |   44 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 44 insertions(+)
> 
> diff --git a/xen/include/public/event_channel.h b/xen/include/public/event_channel.h
> index 07ff321..dff3364 100644
> --- a/xen/include/public/event_channel.h
> +++ b/xen/include/public/event_channel.h
> @@ -71,6 +71,7 @@
>  #define EVTCHNOP_bind_vcpu        8
>  #define EVTCHNOP_unmask           9
>  #define EVTCHNOP_reset           10
> +#define EVTCHNOP_register_extended 11
>  /* ` } */
>  
>  typedef uint32_t evtchn_port_t;
> @@ -258,6 +259,49 @@ struct evtchn_reset {
>  typedef struct evtchn_reset evtchn_reset_t;
>  
>  /*
> + * EVTCHNOP_register_extended: Register extended event channel
> + * NOTES:
> + *  1. Currently only 3-level is supported.
> + *  2. Should fall back to 2-level if this call fails.
> + */
> +/* 64 bit guests need 8 pages for evtchn_pending and evtchn_mask for
> + * 256k event channels while 32 bit ones only need 1 page for 32k
> + * event channels. */
> +#define EVTCHN_MAX_L3_PAGES 8
> +struct evtchn_register_3level {
> +    /* IN parameters. */
> +    uint32_t nr_pages;          /* for evtchn_{pending,mask} */
> +    uint32_t nr_vcpus;          /* for l2sel_{mfns,offsets} */
> +    XEN_GUEST_HANDLE(xen_pfn_t) evtchn_pending;
> +    XEN_GUEST_HANDLE(xen_pfn_t) evtchn_mask;
> +    XEN_GUEST_HANDLE(xen_pfn_t) l2sel_mfns;
> +    XEN_GUEST_HANDLE(xen_pfn_t) l2sel_offsets;
> +};

Registering per-VCPU resources at start-of-day doesn't seem like the
right thing to do here.  Why waste resources on VCPUs that are never
going to be used?  And I don't think we want to constrain how VCPU
hotplug works by requiring resources for all possible VCPUs to be
allocated up-front.
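
For illustration only (this is not a concrete proposal; the struct
and its name are made up), the per-VCPU state could instead be
registered from the guest's cpu-up path, one VCPU at a time:

    /* Hypothetical sketch: register one VCPU's L2 selector when that
     * VCPU is brought online, rather than passing arrays covering all
     * possible VCPUs up-front. */
    struct evtchn_register_3level_vcpu {
        uint32_t vcpu_id;      /* IN: VCPU being brought online */
        xen_pfn_t l2sel_mfn;   /* IN: MFN of this VCPU's L2 selector page */
        uint32_t l2sel_offset; /* IN: offset of the selector in that page */
    };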

If there aren't enough resources for all VCPUs to use the 3-level ABI
then I think the preferred trade-off is to offline the VCPUs that
cannot get resources, not to fall back to the 2-level ABI.

The event channel limit is a hard scalability limit, whereas the
number of VCPUs is a soft one, so a guest is more likely to degrade
gracefully in usefulness if it loses VCPUs rather than available event
channels.

Obviously, if 3-level is requested and the boot VCPU fails to enable
it, then it should fall back to 2-level, because that is better than
just panicking.
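
In other words, something like this on the guest side (a sketch only;
all the helper functions here are hypothetical):

    /* Per-VCPU setup policy: keep the 3-level ABI wherever possible,
     * fall back to 2-level only if the boot VCPU cannot enable it. */
    static int evtchn_setup_vcpu(unsigned int cpu, int boot_cpu)
    {
        if (register_3level_for_vcpu(cpu) == 0)
            return 0;                 /* this VCPU got 3-level */
        if (boot_cpu)
            return use_2level();      /* fallback beats panicking */
        return offline_vcpu(cpu);     /* soft limit: drop the VCPU */
    }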

> +typedef struct evtchn_register_3level evtchn_register_3level_t;
> +DEFINE_XEN_GUEST_HANDLE(evtchn_register_3level_t);
> +
> +/* commands:
> + * EVTCHN_EXTENDED_QUERY(0): query supported extended event channel types,
> + *                _NONE      supported types are or'ed in return value of
> + *                           the hypercall.
> + * EVTCHN_EXTENDED_*:        specific extended event channel subcommand.

Combining query and register makes no sense.
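
A separate query sub-op would be trivial, e.g. (illustrative only; the
op number and struct are made up):

    /* Hypothetical: a dedicated query op, leaving registration to
     * per-ABI sub-ops. */
    #define EVTCHNOP_query_extended 12
    struct evtchn_query_extended {
        uint32_t types;  /* OUT: supported EVTCHN_EXTENDED_* bits, or'ed */
    };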

> + */
> +#define EVTCHN_EXTENDED_QUERY 0
> +/* supported extended event channel */
> +#define EVTCHN_EXTENDED_NONE  0
> +#define _EVTCHN_EXTENDED_L3   0
> +#define EVTCHN_EXTENDED_L3    (1U << _EVTCHN_EXTENDED_L3)
> +struct evtchn_register_extended {
> +    /* IN parameters. */
> +    uint32_t cmd;
> +    union {
> +        evtchn_register_3level_t l3;
> +    } u;
> +};

Adding new members to this union as new event channel ABIs are
implemented is going to change its size, and potentially the alignment
of the union members.  That looks like a source of future ABI
compatibility problems.  See also my comment on patch 18.

There doesn't seem to be any value in having a single register sub-op
for all possible future ABIs.  Just add a new sub-op for each new ABI.
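
For example (a sketch of that alternative; the numbering is
hypothetical):

    /* One sub-op per ABI, each with its own struct: no union whose
     * size and alignment must stay stable across future additions. */
    #define EVTCHNOP_register_3level 11  /* takes evtchn_register_3level_t */
    /* A future ABI simply gets EVTCHNOP_register_<foo> == 12, etc. */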

David
