
Re: [Xen-devel] [PATCH v6 4/7] x86: collect CQM information from all sockets



On 28/01/14 14:23, Xu, Dongxiao wrote:
>
>>> +    case XEN_SYSCTL_getcqminfo:
>>> +    {
>>> +        struct xen_socket_cqmdata *info;
>>> +        uint32_t num_sockets;
>>> +        uint32_t num_rmids;
>>> +        cpumask_t cpu_cqmdata_map;
>> Unless absolutely unavoidable, no CPU masks on the stack please.
> Okay, will allocate it with the "xzalloc" function.

cpumask_var_t mask;

if ( !zalloc_cpumask_var(&mask) )   /* or alloc_cpumask_var(); either can fail, so check */
    return -ENOMEM;
...
free_cpumask_var(mask);

This will switch between a long on the stack and a heap-allocated structure,
depending on whether Xen is compiled for fewer or more than 64 pCPUs.
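
In the context of this hunk it would look something like the following
(untested, purely to illustrate the shape; note the & disappears, since a
cpumask_var_t already acts as a pointer to the mask):

    case XEN_SYSCTL_getcqminfo:
    {
        struct xen_socket_cqmdata *info;
        uint32_t num_sockets;
        uint32_t num_rmids;
        cpumask_var_t cpu_cqmdata_map;

        if ( !system_supports_cqm() )
        {
            ret = -ENODEV;
            break;
        }

        if ( !zalloc_cpumask_var(&cpu_cqmdata_map) )
        {
            ret = -ENOMEM;
            break;
        }

        select_socket_cpu(cpu_cqmdata_map);
        /* ... the rest of the handler as before ... */

        free_cpumask_var(cpu_cqmdata_map);
        break;
    }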

>
>>> +
>>> +        if ( !system_supports_cqm() )
>>> +        {
>>> +            ret = -ENODEV;
>>> +            break;
>>> +        }
>>> +
>>> +        select_socket_cpu(&cpu_cqmdata_map);
>>> +
>>> +        num_sockets = min((unsigned int)cpumask_weight(&cpu_cqmdata_map) + 1,
>>> +                          sysctl->u.getcqminfo.num_sockets);
>>> +        num_rmids = get_cqm_count();
>>> +        info = xzalloc_array(struct xen_socket_cqmdata,
>>> +                             num_rmids * num_sockets);
>> While unlikely right now, you ought to consider the case of this
>> multiplication overflowing.
>>
>> Also - how does the caller know how big the buffer needs to be?
>> Only num_sockets can be restricted by it...
>>
>> And what's worse - you allow the caller to limit num_sockets and
>> allocate info based on this limited value, but you don't restrict
>> cpu_cqmdata_map to just the sockets covered, i.e. if the caller
>> specified a lower number, then you'll corrupt memory.
> Currently the caller (libxc) sets num_rmid and num_sockets to large values,
> i.e. the maximum values that could be available inside the hypervisor.
> If you think this approach is not enough to ensure security, what about the
> caller (libxc) issuing a hypercall to get the two values from the hypervisor
> first, and then allocating a buffer sized for exactly that num_rmid and
> num_sockets?
>
>> And finally, I think the total size of the buffer here can easily
>> exceed a page, i.e. this then ends up being a non-order-0
>> allocation, which may _never_ succeed (i.e. the operation is
>> then rendered useless). I guess it'd be better to e.g. vmap()
>> the MFNs underlying the guest buffer.
> Do you mean we should check the total size, allocate the MFNs one by
> one, and then vmap them?
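
On the multiplication overflow point quoted above: a guard along these lines
would be enough (illustrative only, and see below for why I'd rather this
copy-out interface went away entirely):

        /* Reject requests where num_rmids * num_sockets would wrap before
         * being used as the element count. */
        if ( num_sockets && num_rmids > UINT_MAX / num_sockets )
        {
            ret = -EINVAL;
            break;
        }
        info = xzalloc_array(struct xen_socket_cqmdata,
                             num_rmids * num_sockets);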

I still think this is barking mad as a method of getting this quantity
of data from Xen to the toolstack in a repeated fashion.

Xen should allocate a per-socket buffer at the start of day (or perhaps
on first use of CQM), and the CQM monitoring tool gets to map those
per-socket buffers read-only.

This way, all processing of the CQM data happens in dom0 userspace, not
in Xen in hypercall context; all Xen has to do is periodically dump the
MSRs into the pages.
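
To make that concrete, a very rough sketch (everything below, including the
MSR constant names, the event encoding, and the per-socket buffer layout, is
illustrative rather than taken from this series):

#define MSR_IA32_QM_EVTSEL     0x0c8d  /* IA32_QM_EVTSEL per the SDM */
#define MSR_IA32_QM_CTR        0x0c8e  /* IA32_QM_CTR per the SDM */
#define QM_EVTID_L3_OCCUPANCY  0x1     /* L3 cache occupancy event id */

/* One RMID-indexed buffer per socket, allocated at start of day (or on
 * first use of CQM), and mapped read-only by the monitoring tool. */
static uint64_t *cqm_buffer[NR_CPUS];

/* Run periodically on one CPU of each socket: dump the counters into the
 * shared buffer; all interpretation is left to dom0 userspace. */
static void cqm_dump_msrs(void *unused)
{
    /* However the socket number ends up being derived. */
    uint64_t *buf = cqm_buffer[cpu_to_socket(smp_processor_id())];
    unsigned int rmid;

    for ( rmid = 0; rmid < get_cqm_count(); rmid++ )
    {
        uint64_t val;

        /* RMID in bits 41:32, event id in bits 7:0 of IA32_QM_EVTSEL. */
        wrmsrl(MSR_IA32_QM_EVTSEL,
               ((uint64_t)rmid << 32) | QM_EVTID_L3_OCCUPANCY);
        rdmsrl(MSR_IA32_QM_CTR, val);
        buf[rmid] = val;  /* error/unavailable bits left for userspace */
    }
}

The toolstack side then becomes a straightforward read of the mapped pages,
with no hypercall needed on the sampling path.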

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

