
Re: [Xen-devel] [PATCH v6 01/10] xen: vnuma topology and subop hypercalls



>>> On 25.07.14 at 06:52, <ufimtseva@xxxxxxxxx> wrote:
> On Wed, Jul 23, 2014 at 10:06 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>> On 18.07.14 at 07:50, <ufimtseva@xxxxxxxxx> wrote:
>>> +static int vnuma_alloc(struct vnuma_info **vnuma,
>>> +                       unsigned int nr_vnodes,
>>> +                       unsigned int nr_vcpus,
>>> +                       unsigned int dist_size)
>>> +{
>>> +    struct vnuma_info *v;
>>> +
>>> +    if ( vnuma && *vnuma )
>>> +        return -EINVAL;
>>> +
>>> +    v = *vnuma;
>>> +    /*
>>> +     * check if any of xmallocs exceeds PAGE_SIZE.
>>> +     * If yes, consider it as an error for now.
>>> +     */
>>> +    if ( nr_vnodes > PAGE_SIZE / sizeof(nr_vnodes)       ||
>>> +        nr_vcpus > PAGE_SIZE / sizeof(nr_vcpus)          ||
>>> +        nr_vnodes > PAGE_SIZE / sizeof(struct vmemrange) ||
>>> +        dist_size > PAGE_SIZE / sizeof(dist_size) )
>>
>> Three of the four checks are rather bogus - the types of the
>> variables just happen to match the types of the respective
>> array elements. Best to switch all of them to sizeof(*v->...).
>> Plus I'm not sure about the dist_size check - in its current shape
>> it's redundant with the nr_vnodes one (and really the function
>> parameter seems pointless, i.e. could be calculated here), and
>> it's questionable whether limiting that table against PAGE_SIZE
>> isn't too restrictive. Also indentation seems broken here.
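
To make the above concrete (an illustration only - the field names
vdistance, vcpu_to_vnode and vmemrange are assumed from the patch
context), the element-size based checks could read:

    /* dist_size check intentionally omitted here, see above. */
    if ( nr_vnodes > PAGE_SIZE / sizeof(*v->vdistance)     ||
         nr_vcpus  > PAGE_SIZE / sizeof(*v->vcpu_to_vnode) ||
         nr_vnodes > PAGE_SIZE / sizeof(*v->vmemrange) )
        return -EINVAL;

Since sizeof() doesn't evaluate its operand, this is fine even while v
still points at nothing.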
> 
> I agree about the distance table memory allocation limit.
> With the current interface, the maximum vdistance table dimension will
> effectively be 256 after the nr_vnodes size check, i.e. the maximum
> number of vNUMA nodes. The vdistance table would thus need an
> allocation of two 4K pages.
> Would that be viewed as a potential candidate for the list of affected
> hypercalls in XSA-77?

That list isn't permitted to be extended, so the multi-page allocation
needs to be avoided. And a 2-page allocation wouldn't mean a
security problem (after all the allocation still has a deterministic upper
bound) - it's a functionality one. Just allocate separate pages and
vmap() them.
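
For illustration, a minimal sketch of that approach (not part of the
patch; the helper name, the fixed two-page bound and the exact MFN
conversion are assumptions and may need adjusting to the tree):

    /*
     * Sketch: allocate the vdistance table as individual pages and map
     * them virtually contiguously with vmap(), instead of using one
     * multi-page xmalloc().  Cleanup on error is omitted for brevity.
     */
    static unsigned int *vdistance_map(unsigned int nr_pages)
    {
        mfn_t mfn[2];                   /* bounded: at most two pages here */
        unsigned int i;

        if ( nr_pages > ARRAY_SIZE(mfn) )
            return NULL;

        for ( i = 0; i < nr_pages; i++ )
        {
            struct page_info *pg = alloc_domheap_page(NULL, 0);

            if ( !pg )
                return NULL;            /* real code: free pages allocated so far */
            mfn[i] = page_to_mfn(pg);   /* may need _mfn() wrapping, tree dependent */
        }

        return vmap(mfn, nr_pages);     /* tear down later with vunmap() + freeing */
    }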

>>> +
>>> +        /*
>>> +         * guest passes nr_vnodes and nr_vcpus thus
>>> +         * we know how much memory guest has allocated.
>>> +         */
>>> +        if ( copy_from_guest(&topology, arg, 1) ||
>>> +            guest_handle_is_null(topology.vmemrange.h) ||
>>> +            guest_handle_is_null(topology.vdistance.h) ||
>>> +            guest_handle_is_null(topology.vcpu_to_vnode.h) )
>>> +            return -EFAULT;
>>> +
>>> +        if ( (d = rcu_lock_domain_by_any_id(topology.domid)) == NULL )
>>> +            return -ESRCH;
>>> +
>>> +        rc = -EOPNOTSUPP;
>>> +        if ( d->vnuma == NULL )
>>> +            goto vnumainfo_out;
>>> +
>>> +        if ( d->vnuma->nr_vnodes == 0 )
>>> +            goto vnumainfo_out;
>>
>> Can this second condition validly (other than due to a race) be true if
>> the first one wasn't? (And of course there's synchronization missing
>> here, to avoid the race.)
> 
> My idea was that using the domain_lock / rcu_lock_domain_by_any_id()
> pair would avoid that race.

rcu_lock_domain_by_any_id() only guarantees that the domain won't go
away under your feet. It does nothing to prevent a racing update of
d->vnuma.

> I used domain_lock in the domctl hypercall when a domain's vnuma
> pointer is being set.

Right, but protecting only the writer side doesn't provide any
synchronization.

> XENMEM_get_vnumainfo reads the values and holds the reader lock of that
> domain.
> As setting vnuma happens only once, when the domain boots, domain_lock
> seemed to be OK here.
> Would a spinlock be more appropriate here?

Or an rw lock (if no lockless mechanism can be found).
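
Purely as an illustration (the lock field name below is made up, not
taken from the patch), the rw lock variant would look along the lines
of:

    /* Writer side (domctl), publishing the new topology: */
    write_lock(&d->vnuma_rwlock);
    d->vnuma = vnuma;
    write_unlock(&d->vnuma_rwlock);

    /* Reader side (XENMEM_get_vnumainfo): */
    read_lock(&d->vnuma_rwlock);
    if ( d->vnuma && d->vnuma->nr_vnodes )
    {
        /* ... copy topology to the guest supplied buffers ... */
    }
    read_unlock(&d->vnuma_rwlock);

That way the reader can't observe a half updated d->vnuma, and both of
the NULL / nr_vnodes checks can be done under the same lock.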

Jan

