
Re: [Xen-devel] [PATCH RFC V9 4/5] xen, libxc: Request page fault injection via libxc

On 08/28/2014 03:11 PM, Jan Beulich wrote:
>>>> On 28.08.14 at 14:08, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>> On 08/28/2014 03:03 PM, Jan Beulich wrote:
>>>>>> On 28.08.14 at 13:48, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>>> +    case XEN_DOMCTL_request_pagefault:
>>>> +    {
>>>> +        unsigned int vcpu = op->u.vcpucontext.vcpu;
>>> So you're using two different structures of the union - how can
>>> that possibly work? You've got a 32-bit padding field, which you can
>>> easily use to indicate the desired vCPU. Apart from that I'm not
>>> seeing how your intended "any vCPU" is now getting handled.
>> Sorry about that, started from a copy / paste bug from another domctl
>> case. I'll add the vcpu field to our struct and use that.
>> Not sure I follow the second part of your comment. Assuming the union
>> problem is fixed, is this not what you meant about handling the page
>> fault injection VCPU-based vs domain-based?
> It is, but you said you want an "I don't care on which vCPU"
> special case. In fact with your previous explanations I could
> see you getting into trouble if on a large guest you'd have to
> wait for one particular CPU to get to carry out the desired
> swap-in.

I've double-checked what the memory introspection engine requires, and
(thank you for the comment!) it looks like having the PF injection info
per-VCPU is indeed not only suboptimal, but possibly useless for our
purpose (at least when tied to a particular VCPU, such as VCPU 0),
precisely for the reasons you've indicated: VCPU 0 might actually never
run code in the destination virtual address space.

Having the PF injection information per-domain ensures that at least one
VCPU ends up triggering the actual page fault. To answer Kevin's
question, whether more than one VCPU ends up doing that occasionally is
not a problem from the memory introspection engine's point of view, and
AFAIK it should not be a problem for a guest OS.

So I got a bit ahead of myself with the change in V9, which is why
I've reverted to the original behaviour in V10.

I do understand the preference for a VCPU-based mechanism from a
concurrency point of view, but it could simply fail for us, defeating
the purpose of the patch. I'm also not sure how it would be useful in
the general case, since the same problem that affects us would seem to
apply there as well.

Razvan Cojocaru



