
Re: [Xen-devel] [PATCH 1 of 2] x86/mm: Allow a page in p2m_ram_paged_out state to be loaded



I think top-posting is frowned upon. Below...
>     I think it may have many unforeseen risks.
>     After the p2mt is changed to p2m_ram_rw, the guest can access this page
> freely without trapping into Xen, but at that point the page has not yet
> been prepared.

Nope. By the time the p2mt is changed, the page has already been allocated
and its contents paged in (copy_from_user out of user_ptr).
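
To spell out the ordering, here is a condensed paraphrase of what
p2m_mem_paging_prep() does with this patch applied (locking and error
handling omitted, so this is a sketch rather than the literal source):

    /* 1. Look up the entry; bail unless it is in a paged-out state. */
    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, p2m_query, NULL);
    if ( (p2mt != p2m_ram_paging_in_start) && (p2mt != p2m_ram_paged) )
        goto out;

    /* 2. Allocate a fresh page if the gfn does not have one. */
    if ( !mfn_valid(mfn) )
    {
        page = alloc_domheap_page(d, 0);
        mfn = page_to_mfn(page);
    }

    /* 3. Fill the page from the pager's buffer while the guest still
     *    cannot reach it. */
    guest_map = map_domain_page(mfn_x(mfn));
    copy_from_user(guest_map, user_ptr, PAGE_SIZE);
    unmap_domain_page(guest_map);

    /* 4. Only now does the p2m entry change, exposing the already
     *    populated page to the guest. */
    set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, target_p2mt, a);

So by the time the type becomes p2m_ram_rw (or p2m_ram_logdirty) the
contents are in place; the guest cannot observe an unprepared page.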

Andres
>
>> -----Original Message-----
>> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Andres
>> Lagar-Cavilla
>> Sent: Tuesday, January 10, 2012 5:41 AM
>> To: xen-devel@xxxxxxxxxxxxxxxxxxx
>> Cc: andres@xxxxxxxxxxxxxx; tim@xxxxxxx; olaf@xxxxxxxxx;
>> adin@xxxxxxxxxxxxxx
>> Subject: [Xen-devel] [PATCH 1 of 2] x86/mm: Allow a page in
>> p2m_ram_paged_out state to be loaded
>>
>>  xen/arch/x86/mm/p2m.c |  15 +++++++++++----
>>  1 files changed, 11 insertions(+), 4 deletions(-)
>>
>>
>> This removes the need for a page to be accessed in order to be pageable
>> again. A pager can now page-in pages at will with no need to map them
>> in a separate thread.
>>
>> Signed-off-by: Andres Lagar-Cavilla <andres@xxxxxxxxxxxxxxxx>
>> Acked-by: Tim Deegan <tim@xxxxxxx>
>>
>> diff -r 90f764bf02c3 -r f7c330d5b4b5 xen/arch/x86/mm/p2m.c
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -964,7 +964,7 @@ void p2m_mem_paging_populate(struct doma
>>  int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t
>> buffer)
>>  {
>>      struct page_info *page;
>> -    p2m_type_t p2mt;
>> +    p2m_type_t p2mt, target_p2mt;
>>      p2m_access_t a;
>>      mfn_t mfn;
>>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> @@ -982,8 +982,8 @@ int p2m_mem_paging_prep(struct domain *d
>>      mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, p2m_query, NULL);
>>
>>      ret = -ENOENT;
>> -    /* Allow only missing pages */
>> -    if ( p2mt != p2m_ram_paging_in_start )
>> +    /* Allow missing pages */
>> +    if ( (p2mt != p2m_ram_paging_in_start) && (p2mt != p2m_ram_paged) )
>>          goto out;
>>
>>      /* Allocate a page if the gfn does not have one yet */
>> @@ -1018,8 +1018,15 @@ int p2m_mem_paging_prep(struct domain *d
>>          }
>>      }
>>
>> +    target_p2mt = (p2mt == p2m_ram_paging_in_start) ?
>> +        /* If we kicked the pager with a populate event, the pager will
>> send
>> +         * a resume event back */
>> +        p2m_ram_paging_in :
>> +        /* If this was called asynchronously by the pager, then we can
>> +         * transition directly to the final guest-accessible type */
>> +        (paging_mode_log_dirty(d) ? p2m_ram_logdirty : p2m_ram_rw);
>>      /* Fix p2m mapping */
>> -    set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, p2m_ram_paging_in, a);
>> +    set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K, target_p2mt, a);
>>
>>      atomic_dec(&d->paged_pages);
>>
>>
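
A note on what this buys the pager in practice. Before this patch, pushing
a page back required faking an access: map the gfn (typically from a
separate thread, since the mapping cannot complete until the page is back),
catch the resulting populate event, prep the page, and send resume. With
p2m_ram_paged accepted here, the pager can load the page directly. A rough
sketch of the user-space flow (prep()/resume()/map_gfn()/wait_for_populate()
are illustrative stand-ins for the pager's wrappers around the paging
operations, not actual libxc calls):

    /* Old flow: force a populate event, then answer it. */
    spawn_thread(map_gfn, domid, gfn);  /* makes Xen send a populate event */
    wait_for_populate(&req);            /* mem_event ring */
    prep(domid, gfn, buffer);           /* load contents into the new page */
    resume(domid, gfn);                 /* reply; paused vcpus resume */

    /* New flow: gfn is p2m_ram_paged, no event outstanding. */
    prep(domid, gfn, buffer);           /* entry goes straight to ram_rw /
                                         * logdirty, nothing to resume */

If a guest access races in first and a populate event does go out, prep
still only moves the entry to p2m_ram_paging_in, so the resume handshake
is preserved in that case.
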
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel
>
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

