Re: [Xen-devel] [PATCH v4 1/7] xen: vNUMA support for PV guests
>>> On 04.12.13 at 06:47, Elena Ufimtseva <ufimtseva@xxxxxxxxx> wrote:
> +/*
> + * vNUMA topology specifies vNUMA node
> + * number, distance table, memory ranges and
> + * vcpu mapping provided for guests.
> + */
> +
> +struct vnuma_topology_info {
> +    /* IN */
> +    domid_t domid;
> +    /* OUT */
> +    union {
> +        XEN_GUEST_HANDLE(uint) h;
> +        uint64_t _pad;
> +    } nr_vnodes;
> +    union {
> +        XEN_GUEST_HANDLE(uint) h;
> +        uint64_t _pad;
> +    } nr_vcpus;
> +    union {
> +        XEN_GUEST_HANDLE(uint) h;
> +        uint64_t _pad;
> +    } vdistance;
> +    union {
> +        XEN_GUEST_HANDLE(uint) h;
> +        uint64_t _pad;
> +    } vcpu_to_vnode;
> +    union {
> +        XEN_GUEST_HANDLE(vmemrange_t) h;
> +        uint64_t _pad;
> +    } vmemrange;
> +};
As said before - the use of a separate sub-hypercall here is
pointlessly complicating things.
Furthermore, I fail to see why nr_vnodes and nr_vcpus need
to be guest handles - they can be simple integer fields, and
_both_ must be inputs to XENMEM_get_vnuma_info (otherwise,
if you - as done currently - use d->max_vcpus, there's no
guarantee that this value didn't increase between retrieving
the count and obtaining the full info).
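As a rough sketch (illustrative only, not part of the posted patch),
the structure could then look along these lines, with the
caller-supplied counts bounding the arrays that follow (explicit
padding for alignment omitted for brevity):

struct vnuma_topology_info {
    /* IN */
    domid_t domid;
    /* IN: number of array elements the caller has allocated */
    uint32_t nr_vnodes;
    uint32_t nr_vcpus;
    /* OUT */
    union {
        XEN_GUEST_HANDLE(uint) h;
        uint64_t _pad;
    } vdistance;      /* nr_vnodes * nr_vnodes entries */
    union {
        XEN_GUEST_HANDLE(uint) h;
        uint64_t _pad;
    } vcpu_to_vnode;  /* nr_vcpus entries */
    union {
        XEN_GUEST_HANDLE(vmemrange_t) h;
        uint64_t _pad;
    } vmemrange;      /* nr_vnodes entries */
};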
Once again: The boundaries of _any_ arrays you pass in to
hypercalls must be specified by further information passed into
the same hypercall, with the sole exception of cases where
a priori, immutable information on them is available through
other mechanisms.
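To illustrate (hypothetical code, names made up for the example): the
handler would then verify against the caller-supplied sizes before
copying anything back, so a vCPU count that grew after the guest
queried the sizes cannot lead to its arrays being overrun:

static long vnuma_copy_check(const struct vnuma_topology_info *topo,
                             unsigned int dom_nr_vnodes,
                             unsigned int dom_nr_vcpus)
{
    /*
     * Refuse the request if the guest's buffers (sized by the values
     * it passed in) cannot hold the domain's current topology.
     */
    if ( topo->nr_vnodes < dom_nr_vnodes || topo->nr_vcpus < dom_nr_vcpus )
        return -ENOBUFS;

    return 0;
}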
Jan