
Re: [Xen-devel] [RFC PATCH 6/8]: Ballooning changes



On Thu, 2012-08-16 at 02:05 +0100, Mukesh Rathor wrote:
> ---
>  drivers/xen/balloon.c |   26 +++++++++++++++++++++-----
>  1 files changed, 21 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index 31ab82f..57960a1 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -358,10 +358,18 @@ static enum bp_state increase_reservation(unsigned long nr_pages)
>               BUG_ON(!xen_feature(XENFEAT_auto_translated_physmap) &&
>                      phys_to_machine_mapping_valid(pfn));
>  
> -             set_phys_to_machine(pfn, frame_list[i]);
> -
> +             if (!xen_pvh_domain()) {
> +                     set_phys_to_machine(pfn, frame_list[i]);
> +             } else {
> +                     pte_t *ptep;
> +                     unsigned int level;
> +                     void *addr = __va(pfn << PAGE_SHIFT);
> +                     ptep = lookup_address((unsigned long)addr, &level);
> +                     set_pte(ptep, pfn_pte(pfn, PAGE_KERNEL));

Another thing I just tripped over on ARM, which might be pertinent to x86
PVH too, is that the lowmem mappings on ARM are 2M superpage mappings, so
trying to clear an individual PTE here fails.
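
For illustration only, a minimal sketch of the kind of guard the hunk
would need if it really did insist on clearing a PTE here (this assumes
the x86 lookup_address()/PG_LEVEL_4K interfaces used in the hunk and
balloon.c's BP_ECANCELED; it doesn't help on ARM, where there is no
lookup_address() at all):

	pte_t *ptep;
	unsigned int level;
	void *addr = __va(pfn << PAGE_SHIFT);

	/*
	 * If the VA is covered by a superpage there is no 4K PTE to
	 * clear; writing through the returned entry would clobber the
	 * whole 2M mapping rather than just this one page.
	 */
	ptep = lookup_address((unsigned long)addr, &level);
	if (!ptep || level != PG_LEVEL_4K)
		return BP_ECANCELED;
	set_pte(ptep, __pte(0));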

I was a bit wary of leaving stage 1 mappings in place for PFNs with no
backing MFN, but this 2M mapping issue has caused me to revise that
opinion: there's no need to worry about the stage 1 mapping still being
present so long as you guarantee that the associated virtual address is
never touched, which the balloon driver should be able to do. The other
option is to shatter such mappings, which is far too much pain to
contemplate.

Long story short, I reckon you can drop this hunk (and the associated
similar changes) and rely on the XENFEAT_auto_translated_physmap check
inside set_phys_to_machine() to do the right thing.
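
For reference, the check I mean looks roughly like this (going from
memory of the x86 set_phys_to_machine() wrapper in
arch/x86/include/asm/xen/page.h), so for an auto-translated guest the
call already amounts to a sanity check and nothing more:

	static inline bool set_phys_to_machine(unsigned long pfn, unsigned long mfn)
	{
		if (xen_feature(XENFEAT_auto_translated_physmap)) {
			/* Auto-translated guests keep an identity p2m. */
			BUG_ON(pfn != mfn && mfn != INVALID_P2M_ENTRY);
			return true;
		}
		return __set_phys_to_machine(pfn, mfn);
	}

In other words the existing call sites in balloon.c should already do the
right thing for PVH without any xen_pvh_domain() special-casing.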

I guess PVH currently suppresses superpage mappings on x86 (probably
inherited from PV), but undoing that might be something to investigate in
the future? I expect it would help performance.

Ian.

> +             }
>               /* Link back into the page tables if not highmem. */
> -             if (xen_pv_domain() && !PageHighMem(page)) {
> +             if (xen_pv_domain() && !PageHighMem(page) &&
> +                 !xen_pvh_domain()) {
>                       int ret;
>                       ret = HYPERVISOR_update_va_mapping(
>                               (unsigned long)__va(pfn << PAGE_SHIFT),
> @@ -417,7 +425,14 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
>  
>               scrub_page(page);
>  
> -             if (xen_pv_domain() && !PageHighMem(page)) {
> +             if (xen_pvh_domain() && !PageHighMem(page)) {
> +                     unsigned int level;
> +                     pte_t *ptep;
> +                     void *addr = __va(pfn << PAGE_SHIFT);
> +                     ptep = lookup_address((unsigned long)addr, &level);
> +                     set_pte(ptep, __pte(0));
> +
> +             } else if (xen_pv_domain() && !PageHighMem(page)) {
>                       ret = HYPERVISOR_update_va_mapping(
>                               (unsigned long)__va(pfn << PAGE_SHIFT),
>                               __pte_ma(0), 0);
> @@ -433,7 +448,8 @@ static enum bp_state decrease_reservation(unsigned long nr_pages, gfp_t gfp)
>       /* No more mappings: invalidate P2M and add to balloon. */
>       for (i = 0; i < nr_pages; i++) {
>               pfn = mfn_to_pfn(frame_list[i]);
> -             __set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
> +             if (!xen_pvh_domain())
> +                     __set_phys_to_machine(pfn, INVALID_P2M_ENTRY);
>               balloon_append(pfn_to_page(pfn));
>       }
>  


