
Re: [Xen-devel] [PATCH v2 2/3] xen/pvh: check permissions when adding MMIO regions



On 21/01/15 10:08, Roger Pau Monné wrote:
> On 20/01/15 at 19:19, Andrew Cooper wrote:
>> On 20/01/15 17:05, Roger Pau Monne wrote:
>>> Check that MMIO regions added to PVH Dom0 are allowed. Previously a PVH Dom0
>>> would have access to the full MMIO range.
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>> ---
>>> Changes since v1:
>>>  - Use the newly introduced p2m_access_t to set the access type.
>>>  - Don't add a next label.
>>> ---
>>>  xen/arch/x86/domain_build.c | 17 +++++++++++++++--
>>>  1 file changed, 15 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/domain_build.c b/xen/arch/x86/domain_build.c
>>> index f687c78..41d2541 100644
>>> --- a/xen/arch/x86/domain_build.c
>>> +++ b/xen/arch/x86/domain_build.c
>>> @@ -319,12 +319,25 @@ static __init void pvh_add_mem_mapping(struct domain *d, unsigned long gfn,
>>>                                         unsigned long mfn, unsigned long nr_mfns)
>>>  {
>>>      unsigned long i;
>>> +    mfn_t omfn;
>>> +    p2m_type_t t;
>>> +    p2m_access_t a;
>>>      int rc;
>>>  
>>>      for ( i = 0; i < nr_mfns; i++ )
>>>      {
>>> -        if ( (rc = set_mmio_p2m_entry(d, gfn + i, _mfn(mfn + i),
>>> -                                      p2m_get_hostp2m(d)->default_access)) )
>>> +        if ( !iomem_access_permitted(d, mfn + i, mfn + i) ) {
>>> +            omfn = get_gfn_query_unlocked(d, gfn + i, &t);
>>> +            guest_physmap_remove_page(d, gfn + i, mfn_x(omfn), PAGE_ORDER_4K);
>>> +            continue;
>>> +        }
>> This suggests a design flaw (possibly pre-existing).  We should not be
>> removing physmap entries in pvh_add_mem_mapping(), nor should we be in a
>> position to need to revoke physmap entries during domain build.
>>
>> If there is anything needing revoking at this stage, it should not have
>> been added earlier.  How did you come to introduce this code?
> This code was introduced with the PVH Dom0 support series done by
> Mukesh. Basically we let construct_dom0 build the physmap as it would be
> done for a PV Dom0 (no holes at all, plain physmap from 0 to maxmem) and
> then we punch the MMIO holes as needed. After punching the holes, we add
> the leftover memory to the end of the memory map.
>
> IMHO this seems better than having two different ways of building the
> Dom0 memory map interleaved in the code, one for PV and one for PVH,
> especially taking into account that the code in construct_dom0 is already
> quite convoluted.

PV domains do not have physmaps; physmaps are an HVM construct. 
Attempting to pretend that PV and PVH are the same when it comes to
memory setup like this is a misdesign at best.

Looking through the code, the memory setup for PVH appears completely
backwards, and should be fixed properly rather than having yet another
hack placed on top.

From a (very rushed) look at the code, it currently does the following (the
p2m side of which is sketched after the list):
* Construct PV pagetables
* Construct a plain p2m from 0 to dom0 max RAM
* Wander over the PV pagetables, translating mfns to pfns
* Extra miscellaneous p2m changes, including making holes, shifting RAM and
changing permissions
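
For illustration only, the p2m side of that ordering could be sketched
roughly as below.  Every name in the sketch is a hypothetical stand-in
rather than a real construct_dom0 or p2m interface; the point is simply the
ordering: a flat physmap is built first, the MMIO holes are punched out of
it afterwards, and the displaced RAM is re-added at the end.

    /*
     * Hypothetical sketch of the current PVH Dom0 memory setup ordering.
     * None of these helpers exist in Xen; they only stand in for the real
     * physmap/p2m operations.
     */
    struct domain;                                  /* opaque for the sketch */
    struct mmio_hole { unsigned long start_pfn, end_pfn; };

    extern void map_ram_pfn(struct domain *d, unsigned long pfn);
    extern void unmap_pfn_range(struct domain *d, unsigned long s,
                                unsigned long e);

    static void sketch_current_pvh_dom0_memmap(struct domain *d,
                                               unsigned long max_pfn,
                                               const struct mmio_hole *holes,
                                               unsigned int nr_holes)
    {
        unsigned long pfn, displaced = 0;
        unsigned int i;

        /* 1. Flat physmap from 0 to maxmem, exactly as a PV Dom0 would get. */
        for ( pfn = 0; pfn < max_pfn; pfn++ )
            map_ram_pfn(d, pfn);

        /* 2. Punch the MMIO holes, discarding the RAM that was mapped there. */
        for ( i = 0; i < nr_holes; i++ )
        {
            unmap_pfn_range(d, holes[i].start_pfn, holes[i].end_pfn);
            displaced += holes[i].end_pfn - holes[i].start_pfn;
        }

        /* 3. Re-add the displaced RAM past the original end of memory. */
        for ( pfn = 0; pfn < displaced; pfn++ )
            map_ram_pfn(d, max_pfn + pfn);
    }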

A more sensible setup (roughly sketched below the list) would be:
* Start constructing the p2m with the identity and read-only areas
* Insert regular pages from 0 to max, skipping over existing areas
* Extract the kernel and initrd into the p2m (the domain builder can
certainly do this)
* Set up identity pagetables covering the kernel and initrd
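
To make that ordering concrete, a rough sketch follows.  Again, every name
in it is a hypothetical stand-in (none of these are real Xen or
domain-builder functions); it only shows the p2m being populated front to
back, so that nothing added early has to be revoked later.

    /*
     * Hypothetical sketch of the proposed PVH Dom0 build ordering.  The
     * helpers are illustrative stand-ins, not real Xen interfaces.
     */
    #include <stdbool.h>

    struct domain;                                  /* opaque for the sketch */
    struct pfn_range { unsigned long start_pfn, end_pfn; };

    extern void p2m_add_identity_ro(struct domain *d, const struct pfn_range *r);
    extern void p2m_add_ram_pfn(struct domain *d, unsigned long pfn);
    extern void place_kernel_and_initrd(struct domain *d);
    extern void build_identity_pagetables(struct domain *d);

    static bool pfn_in_ranges(unsigned long pfn, const struct pfn_range *r,
                              unsigned int nr)
    {
        unsigned int i;

        for ( i = 0; i < nr; i++ )
            if ( pfn >= r[i].start_pfn && pfn < r[i].end_pfn )
                return true;
        return false;
    }

    static void sketch_proposed_pvh_dom0_build(struct domain *d,
                                               const struct pfn_range *special,
                                               unsigned int nr_special,
                                               unsigned long max_pfn)
    {
        unsigned long pfn;
        unsigned int i;

        /* 1. Identity and read-only areas go into the p2m first. */
        for ( i = 0; i < nr_special; i++ )
            p2m_add_identity_ro(d, &special[i]);

        /* 2. Regular RAM from 0 to max, skipping areas already populated. */
        for ( pfn = 0; pfn < max_pfn; pfn++ )
            if ( !pfn_in_ranges(pfn, special, nr_special) )
                p2m_add_ram_pfn(d, pfn);

        /* 3. Extract the kernel and initrd into the now-complete p2m. */
        place_kernel_and_initrd(d);

        /* 4. Identity pagetables covering the kernel and initrd. */
        build_identity_pagetables(d);
    }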

This way, there is no going back to undo something which was done
incorrectly earlier during the build.  I presume I have oversimplified some
areas, but I hope I have managed to get my point across.

~Andrew


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

