
Re: [Xen-devel] [PATCH] Provide support for multiple frame buffers in Xen.


  • To: "Tim (Xen.org)" <tim@xxxxxxx>
  • From: Robert Phillips <robert.phillips@xxxxxxxxxx>
  • Date: Mon, 22 Oct 2012 13:45:47 -0400
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Mon, 22 Oct 2012 17:46:23 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: Ac2wb7eDSb0IfdX+ReOhythmnejcygAAV/mA
  • Thread-topic: [Xen-devel] [PATCH] Provide support for multiple frame buffers in Xen.

Tim,

Thank you for taking the time (and I expect it was a considerable time) to 
review this patch.

And thanks for pointing out the coding style nits.  I'll re-submit the patch 
when the dust settles regarding its contents.


re " You don't say why it's useful to have multiple framebuffers ..." 
Yes, in a laptop environment there can be multiple monitors, each with its own 
frame buffer, and the monitors can be plugged/unplugged dynamically.
So the number of monitors and the location and size of their framebuffers is 
dynamic.


re: The " complex, buggy and unnecessary " code eliminated by this patch.
I was refering to the function paging_log_dirty_range().

I believe the bug was toward the end of the function, where it called 
clear_page(l1).
The function copies bits from l1 into a temporary bitmap, then copies them from 
there into the user-provided dirty_bitmap.
When it's done, it clears the whole page at l1.
But two frame buffers might cohabit that page, not overlapping but at 
distinct areas within it.
Reading the dirtiness for one frame buffer and then clearing the whole page 
wipes out information "owned" by the other frame buffer.
This bug would not show up if there is only one frame buffer, so your live 
migration code is OK.
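
To make the hazard concrete, here is a minimal standalone C sketch of the 
failure pattern (report_dirty and the constants are hypothetical; this 
illustrates the shape of the bug, not the actual Xen code):

    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096
    /* stand-in for Xen's clear_page() */
    #define clear_page(p) memset((p), 0, PAGE_SIZE)

    /*
     * Two frame buffers keep their dirty bits in distinct,
     * non-overlapping byte ranges of the same bitmap page 'l1'.
     */
    static void report_dirty(uint8_t *l1, uint8_t *dirty_bitmap,
                             unsigned int start, unsigned int bytes)
    {
        /* Copy out this frame buffer's slice of the dirty bits... */
        memcpy(dirty_bitmap, l1 + start, bytes);

        /*
         * ...then clear the whole page.  Dirty bits belonging to the
         * other frame buffer sharing this page are silently lost.
         */
        clear_page(l1);  /* bug: should clear only [start, start + bytes) */
    }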

The old code wasn't really necessary because there is an easier way of telling 
whether a page is dirty.
Yes, one can look at the dirty bit, as the old code did.
Or one can check whether the page's type has been switched back to p2m_ram_rw, 
which is what the new function hap_clean_vram_tracking_range() does.

In terms of complexity, the new hap_clean_vram_tracking_range() is much 
simpler than the old paging_log_dirty_range().
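
In outline, the new check amounts to something like this (a simplified sketch 
of the idea; the accessor name get_gfn_query_type is a placeholder, and error 
handling is omitted):

    /*
     * Pages in a tracked range start out as p2m_ram_logdirty; the first
     * guest write faults and flips the type to p2m_ram_rw.  So "type is
     * p2m_ram_rw" means "written since we last looked".
     */
    for ( gfn = begin_pfn; gfn < begin_pfn + nr; gfn++ )
    {
        p2m_type_t t = get_gfn_query_type(d, gfn);  /* placeholder accessor */

        if ( t == p2m_ram_rw )
        {
            unsigned long i = gfn - begin_pfn;

            dirty_bitmap[i / 8] |= (1 << (i & 7));
            /* Re-arm: make the page read-only again so the next write faults. */
            p2m_change_type(d, gfn, p2m_ram_rw, p2m_ram_logdirty);
        }
    }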


re: " implemented log-dirty in two new ways"

The HAP code path scans EPT entries; the shadow code path, obviously, does not.
The shadow code does substantial bookkeeping, which I'll get to in a minute, 
that the HAP path does not.
Although HAP does bookkeep the set of ranges, none of the linked-list machinery 
is used or allocated in HAP mode.

In shadow mode the new code makes a clean separation between determining the 
interesting set of PTEs and examining those PTEs looking for dirty bits.

When a new range is detected, the code *does* scan all PTEs looking for ones 
that map into the new range, but (unlike the old code) that is a one-time event.
From then on the range's set of PTEs is maintained by "dead reckoning", i.e. by 
hooking all changes to PTEs and checking whether they pertain to the range.
Unfortunately, when a process terminates it doesn't tear down its page tables; 
it just abandons them.
So a range can be left pointing at a PTE that is no longer in a page table 
page.
But, in due time, the shadow code recognizes such abandoned pages and recycles 
them.  The new code hooks that too, and updates its ranges accordingly.
When a no-longer-in-use range's pages have all been recycled, the range deletes 
itself.
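
The hook itself is conceptually simple; something like the following captures 
its shape (a sketch with hypothetical helper names such as 
range_containing_gfn, not the patch code):

    /* Forward declaration; the range structure is sketched further below. */
    typedef struct dv_range dv_range_t;

    /*
     * Called whenever the shadow code installs or removes a shadow PTE,
     * so each range's set of mapping PTEs stays current by "dead
     * reckoning" instead of by rescanning page tables.
     */
    static void vram_pte_update_hook(struct domain *d, paddr_t sl1ma,
                                     unsigned long old_gfn,
                                     unsigned long new_gfn)
    {
        dv_range_t *r;

        /* The PTE used to map into a tracked range: stop tracking it. */
        if ( (r = range_containing_gfn(d, old_gfn)) != NULL )
            range_del_pte(r, sl1ma);

        /* The PTE now maps into a tracked range: start tracking it. */
        if ( (r = range_containing_gfn(d, new_gfn)) != NULL )
            range_add_pte(r, sl1ma, new_gfn);
    }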

So the overhead of bookkeeping a range's set of interesting PTEs is low.

And when it's time to look for dirty bits, we know precisely which PTEs to look 
at.
The old code used to scan all page tables periodically and we would see a 
performance hit with precisely that periodicity.

One unfortunate bit of complexity comes from the fact that several PTEs can map 
the same guest physical page.
We have to bookkeep them all, so each PTE that maps a guest physical page 
must be represented by its own dv_paddr_link, and the PTEs that map the same 
guest page are all linked together.
The head of that linked list is the entry in the range's pl_tab array that 
corresponds to the guest physical page.
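
Structurally that looks roughly like the following (field names paraphrased 
from the patch's dirty_vram.h, so treat the details as illustrative):

    typedef struct dv_paddr_link {
        paddr_t sl1ma;                  /* machine address of one shadow PTE */
        struct dv_paddr_link *pl_next;  /* next PTE mapping the same guest page */
    } dv_paddr_link_t;

    typedef struct dv_range {
        unsigned long begin_pfn;
        unsigned long end_pfn;
        /*
         * One entry per guest page in [begin_pfn, end_pfn).  Each entry
         * heads the linked list of all shadow PTEs that currently map
         * that guest page.
         */
        dv_paddr_link_t *pl_tab;
        /* ... plus reference counts, list linkage, etc. ... */
    } dv_range_t;

Harvesting dirty bits is then just a walk of pl_tab: for each guest page, 
follow the chain, map each sl1ma, and test (and clear) the dirty bit in that 
shadow PTE.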


re: " since we already maintain a sparse bitmap to hold the dirty log"
I don't believe the dirty log is maintained except when page_mode is set for 
PG_log_dirty.
That mode exists for live migrate and the discipline for entering/leaving it is 
quite different than the finer granularity needed for dirty vram.
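
A minimal illustration of that distinction (paging_mode_log_dirty() is the 
existing Xen predicate for PG_log_dirty; the surrounding code is hypothetical):

    /*
     * The sparse dirty-log bitmap is only populated while the domain is
     * in log-dirty mode.  Dirty-vram tracking has to work without
     * putting the whole domain into that global mode.
     */
    if ( !paging_mode_log_dirty(d) )
        return;  /* no dirty log is being maintained */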

Thanks
-- rsp
Robert Phillips 
Principal Software Engineer,  XenClient - Citrix - Westford


-----Original Message-----
From: Tim Deegan [mailto:tim@xxxxxxx] 
Sent: Monday, October 22, 2012 12:10 PM
To: Robert Phillips
Cc: xen-devel@xxxxxxxxxxxxx; Robert Phillips
Subject: Re: [Xen-devel] [PATCH] Provide support for multiple frame buffers in 
Xen.

At 14:15 -0400 on 16 Oct (1350396902), Robert Phillips wrote:
> From: Robert Phillips <robert.phillips@xxxxxxxxxxxxxxxxxxx>
> 
> Support is provided for both shadow and hardware assisted paging (HAP) modes.
> This code bookkeeps the set of video frame buffers (vram),
> detects when the guest has modified any of those buffers and, upon request,
> returns a bitmap of the modified pages.
> 
> This lets other software components re-paint the portions of the monitor (or 
> monitors) that have changed.
> Each monitor has a frame buffer of some size at some position in guest 
> physical memory.
> The set of frame buffers being tracked can change over time as monitors are 
> plugged and unplugged.
> 
> Signed-Off-By: Robert Phillips <robert.phillips@xxxxxxxxxx>
> ---
>  xen/arch/x86/hvm/Makefile            |    3 +-
>  xen/arch/x86/hvm/dirty_vram.c        |  878 ++++++++++++++++++++++++++++++++++
>  xen/arch/x86/hvm/hvm.c               |    4 +-
>  xen/arch/x86/mm/hap/hap.c            |  140 +-----
>  xen/arch/x86/mm/paging.c             |  232 ++++-----
>  xen/arch/x86/mm/shadow/common.c      |  335 +++++++------
>  xen/arch/x86/mm/shadow/multi.c       |  169 +++----
>  xen/arch/x86/mm/shadow/multi.h       |    7 +-
>  xen/arch/x86/mm/shadow/types.h       |    1 +
>  xen/include/asm-x86/hap.h            |    4 -
>  xen/include/asm-x86/hvm/dirty_vram.h |  157 ++++++
>  xen/include/asm-x86/hvm/domain.h     |    2 +-
>  xen/include/asm-x86/paging.h         |   22 +-
>  xen/include/asm-x86/shadow.h         |    6 -
>  14 files changed, 1403 insertions(+), 557 deletions(-)

Wow.  That's a bunch of code! :)  Thanks for sending it, and for the
document too.

You don't say why it's useful to have multiple framebuffers -- I take it
this is useful for laptop environments?  Also, I'm a bit surprised not
to see some hypercall API changes to go with it.

Reading the PDF you sent, it looks like you've implemented log-dirty in
two new ways: by scanning the shadow PTEs and by scanning EPT entries.
But you also leave the old ways (trap-on-access and populate a bitmap) 
despite saying "The eliminated code was complex, buggy and unnecessary".
If the bitmap-tracking code is really buggy we need to know about it, as
that's what we use for live migration!  What's the problem with it?

I've had a brief skim of the code, and it looks rather over-complex for
what it does.  I'm not sure what the purpose of all these linked lists
of ranges is -- couldn't this just be done with a struct rangeset
describing which areas are being tracked?  Then since we already
maintain a sparse bitmap to hold the dirty log we don't need any other
metadata, or other ways of checking dirtiness.

Style nits: you're not consistently following the Xen coding style, in
particular the spaces around parentheses in 'if's and 'for's.  Also
you've got some code in new files that's clearly at least derived from
old code but just got your own name and copyright at the head of the
file.  Please carry the copyright over from old code when you move it.

Cheers,

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

