
Re: [Xen-devel] [PATCH RFC] xen: allow guest_remove_page to remove p2m_mmio_direct pages



On 19/06/14 13:08, Tim Deegan wrote:
> At 14:36 +0100 on 18 Jun (1403098594), Jan Beulich wrote:
>>>>> On 18.06.14 at 13:38, <roger.pau@xxxxxxxxxx> wrote:
>>> If a guest tries to do a foreign/grant mapping in a memory region
>>> marked as p2m_mmio_direct Xen will complain with the following
>>> message:
>>>
>>> (XEN) memory.c:241:d0v0 Bad page free for domain 0
>>>
>>> The mapping will nonetheless succeed. This is especially problematic for
>>> PVH Dom0, in which we map all the e820 holes and memory up to 4GB as
>>> p2m_mmio_direct.
>>>
>>> In order to deal with it, add a special case for p2m_mmio_direct
>>> regions in guest_remove_page: if the domain is the hardware domain,
>>> call clear_mmio_p2m_entry in order to remove the mapping.
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>>> Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
>>> Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
>>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>>> Cc: Keir Fraser <keir@xxxxxxx>
>>> Cc: Tim Deegan <tim@xxxxxxx>
>>> ---
>>> I've spoken with Jan about this issue on IRC, and he felt that adding
>>> such a special case in guest_remove_page is not the right thing to
>>> do. I could look into at least two other approaches:
>>>
>>>  - Make XENMEM_remove_from_physmap able to remove p2m_mmio_direct
>>>    ranges, right now it doesn't work with those ranges because
>>>    get_page_from_gfn fails when used on p2m_mmio_direct p2m entries.
>>>
>>>  - Extend HVMOP_set_mem_type to be able to perform the conversion from
>>>    p2m_mmio_direct to p2m_invalid and vice versa on Dom0 request.
>>>
>>> I have to admit I don't like either of these two options, because
>>> they would mean we need to perform another hypercall before mapping,
>>> which would make it even slower.
>>
>> Looking at where and how you do the change you prefer makes
>> me even more certain this isn't the right thing.
>>
>>> --- a/xen/common/memory.c
>>> +++ b/xen/common/memory.c
>>> @@ -204,6 +204,12 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>>>          p2m_mem_paging_drop_page(d, gmfn, p2mt);
>>>          return 1;
>>>      }
>>> +    if ( is_hardware_domain(d) && p2mt == p2m_mmio_direct )
>>> +    {
>>> +        clear_mmio_p2m_entry(d, gmfn);
>>> +        put_gfn(d, gmfn);
>>> +        return 1;
>>> +    }
>>>  #else
>>>      mfn = gmfn_to_mfn(d, gmfn);
>>>  #endif
>>
>> First of all, the #ifdef CONFIG_X86 block you add this to is wrongly
>> tagged as x86 - it really is specific to whether the architecture
>> supports paging. As a consequence, this would have to go into a
>> separate similar conditional, and I guess you then won't completely
>> disagree that these x86-special code portions in a supposedly
>> generic function feel wrong.
>>
>> And then you blindly call clear_mmio_p2m_entry() for all
>> p2m_mmio_direct entries - I'd suggest minimally restricting this to
>> those where the MFN is INVALID_MFN, so that real MMIO ranges
>> would be kept alive.
> 
> I think it's OK to remove whatever's found here, in a function called
> 'guest_remove_page()'.  So from that point of view I'm happy with this
> patch.
> 
>> Finally, how is this same situation going to be handled for PVH driver
>> domains or disaggregated PVH control domains? The
>> is_hardware_domain() here seems too much of a special case too...
> 
> Agreed -- that is_hardware_domain() should be removed.  And with that
> removed, this patch has my ack.

Ok, I'm going to remove that and send v2. Thanks for the feedback.

Roger.
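[For context, a minimal sketch of the direction the thread converges on: the
hunk from the patch above with the is_hardware_domain() check dropped, as Tim
and Jan request. This is only an illustration of the agreed change, not the
actual v2 patch; Jan's suggested INVALID_MFN restriction is not shown, since
Tim considered removing whatever is found here to be acceptable.]

    --- a/xen/common/memory.c
    +++ b/xen/common/memory.c
    @@ guest_remove_page()
             p2m_mem_paging_drop_page(d, gmfn, p2mt);
             return 1;
         }
    +    /* Sketch of v2: no is_hardware_domain() restriction, per review. */
    +    if ( p2mt == p2m_mmio_direct )
    +    {
    +        /* Tear down the direct-MMIO p2m entry and drop the gfn ref. */
    +        clear_mmio_p2m_entry(d, gmfn);
    +        put_gfn(d, gmfn);
    +        return 1;
    +    }
     #else
         mfn = gmfn_to_mfn(d, gmfn);
     #endif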


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
