
Re: [Xen-devel] [PATCH] x86/domctl: Adjust size calculations for XEN_DOMCTL_get{_ext_vcpucontext, vcpuextstate}



On 28/04/14 14:45, Jan Beulich wrote:
>>>> On 28.04.14 at 14:26, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 28/04/14 12:37, Jan Beulich wrote:
>>>>>> On 28.04.14 at 12:59, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> On 28/04/14 11:34, Jan Beulich wrote:
>>>>>>>> On 28.04.14 at 11:43, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>>> XEN_DOMCTL_get_ext_vcpucontext suffers from the same issue, but while
>>>>>> trying to fix that in a similar way, I discovered that it had a genuine
>>>>>> bug when returning the count of MSRs to the toolstack.  When running the
>>>>>> hypercall on an active vcpu, the vcpu can arbitrarily alter the count
>>>>>> returned to the toolstack by clearing and setting relevant MSRs.
>>>>> Did you perhaps overlook the vcpu_pause() there?
>>>> There is a vcpu pause in the hypercall, so for the duration of the
>>>> hypercall the returned value will be consistent.
>>>>
>>>> However without the toolstack pausing the domain, issuing this hypercall
>>>> twice, first to get the size and second to get the data might still
>>>> result in -ENOBUFS if the vcpu suddenly writes non-0 values to the MSRs.
>>> And in what way is this different from e.g. XEN_DOMCTL_get_vcpuextstate?
>> As xcr0_accum is strictly increasing and only in a few possible steps,
>> the size returned can never decrease.  As it is context switch material,
>> the chances are very good that it will reach the maximum the guest
>> kernel is willing to use a long time before migration happens.
> Chances you say. But we need guarantees, or rely on the tool stack
> knowing to re-issue such requests upon certain kinds of failures (or
> accept that migration may not work occasionally, with a retry helping).

Yes - that is the fix I intend to use.  In the case of -EINVAL where the
reported size is now larger, realloc the buffer to the new size and retry.
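As a rough sketch of that retry pattern (the get_msrs() function below is a
stand-in stub for the real domctl call, and the growing count simulates a vcpu
writing new MSRs between the size probe and the data fetch; all names here are
hypothetical, not the actual libxc interface):

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

struct msr_entry { uint32_t index; uint64_t value; };

/* Stub standing in for the domctl: pretend the guest grows from 2 to 4
 * in-use MSRs between the size probe and the data fetch. */
static unsigned int current_count = 2;
static int get_msrs(struct msr_entry *buf, unsigned int *count)
{
    if ( buf == NULL || *count < current_count )
    {
        *count = current_count;   /* report the size now required */
        current_count = 4;        /* guest writes more MSRs meanwhile */
        return -ENOBUFS;
    }
    for ( unsigned int i = 0; i < current_count; i++ )
        buf[i] = (struct msr_entry){ .index = i, .value = 0 };
    *count = current_count;
    return 0;
}

/* Toolstack side: realloc to the newly reported size and retry until
 * the buffer is large enough.  Returns NULL on error. */
static struct msr_entry *fetch_msrs(unsigned int *count_out)
{
    struct msr_entry *buf = NULL;
    unsigned int count = 0;
    int rc;

    while ( (rc = get_msrs(buf, &count)) == -ENOBUFS )
    {
        struct msr_entry *tmp = realloc(buf, count * sizeof(*buf));
        if ( !tmp )
        {
            free(buf);
            return NULL;
        }
        buf = tmp;
    }
    if ( rc )
    {
        free(buf);
        return NULL;
    }
    *count_out = count;
    return buf;
}
```

The loop terminates in practice because the toolstack pauses the domain before
the final save pass, at which point the count can no longer change under it.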

>
>>>>> I'm also not really in favor of forcing the tools to allocate memory
>>>>> for the array if in fact no MSRs are being used by the guest.
>>>> If there are no msrs to receive, then passing a NULL guest handle is
>>>> still fine.
>>> But the caller can't know whether the count was non-zero just because
>>> that's the theoretical maximum or because some MSR really is in use.
>> Why is that a problem?
> The problem is with the first half of your earlier reply: "If there are
> no msrs to receive ..." - the caller just can't tell this with your change
> in place.
>
>> If the toolstack wants to save any possible MSRs the guest is using,
>> then it is going to have to provide a buffer large enough for any
>> eventual number of MSRs.  In the case that the buffer is sufficiently
>> sized, Xen writes back msr_count with the number of MSRs written, so the
>> toolstack can detect when fewer MSRs have been written back.
> In the end all I want to be assured is that migration would fail at the
> sending side if there are MSRs that need transmitting.
>
> Jan
>

Ah, I see.  Given the sole caller in xc_domain_save(), I will add a hunk
in v2 which explicitly fails the migration if MSRs would need
transmitting, making this safe for the short period before proper MSR
transmission can be added.
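Something along these lines for the xc_domain_save() hunk (a minimal sketch;
check_msrs_for_save() and the error path are hypothetical, assuming msr_count
comes back from the extended-vcpucontext domctl):

```c
/* Sketch: until MSR transmission is implemented in the migration
 * stream, refuse to save a domain whose vcpus have live MSR state,
 * rather than silently dropping it on the receiving side. */
static int check_msrs_for_save(unsigned int msr_count)
{
    if ( msr_count != 0 )
    {
        /* ERROR("Domain has %u MSRs to save, which is unsupported",
         *       msr_count); */
        return -1;  /* abort the save */
    }
    return 0;       /* no MSR state; migration can proceed */
}
```

With this in place, a migration that would lose MSR state fails visibly on the
sending side, which is the guarantee asked for above.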

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel