
Re: [Xen-devel] [PATCH] Provide support for multiple frame buffers in Xen



At 06:38 -0500 on 05 Feb (1360046289), Robert Phillips wrote:
> > > +        ext = __map_domain_page(pg);
> > > +        /* Is unmapped in dirty_vram_free() */
> > 
> > Mappings from map_domain_page() can't be kept around like this.  They're
> > supposed to be short-term, and in systems where we don't have a full 1-1
> > map of memory (e.g. x86 once Jan's 16TB-support series goes in) there are a
> > limited number of mapping slots.
> > 
> > Jan, what do you recommend here?  These are pages of linked-list entries,
> > part of the p2m/mm overhead and so allocated from the guest's shadow
> > memory.  Are we going to have to allocate them from xenheap instead or is
> > there any way to avoid that?
> 
> One possible solution is to use a fixmap table. 
> Rather than storing VAs in these tables, the code could store
> entries containing physical_page/offset_in_page. 
> 
> When managing the table the code could remap a given PA/offset into a
> single fixed virtual address (and flush its TLB entry).  Flushing a
> single TLB entry shouldn't hurt performance too terribly, right?

It should be OK, but I think you might as well use the existing
map_domain_page() machinery for this rather than adding a new fixmap --
it recycles a ring of mapping slots, so a single TLB flush is amortised
over a whole ring's worth of mappings.

So I suggest storing maddrs in the table and using map_domain_page() /
unmap_domain_page() plus the offset within the page to access them.  The
existing shadow_track_dirty_vram() function already does this (search
for sl1ma).
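
Roughly what I have in mind, as an untested sketch (the struct and
function names below are made up rather than taken from the patch, and
it assumes the current unsigned-long-mfn prototype of
map_domain_page()):

    /* Keep a machine address in the list node rather than a mapped VA. */
    struct dv_paddr_link {
        paddr_t sl1ma;                  /* maddr of the tracked shadow l1e */
        struct dv_paddr_link *pl_next;
    };

    /* Map the page only for the duration of each access. */
    static l1_pgentry_t read_sl1e(paddr_t sl1ma)
    {
        l1_pgentry_t sl1e;
        void *va = map_domain_page(paddr_to_pfn(sl1ma));

        sl1e = *(l1_pgentry_t *)((char *)va + (sl1ma & ~PAGE_MASK));
        unmap_domain_page(va);
        return sl1e;
    }

That way each dereference costs at most a mapcache lookup, and the
long-lived table never holds a map_domain_page() mapping open.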

Tim.
