
Re: [Xen-devel] [PATCH v4 1/6] xen: improve changes to xen_add_to_physmap



On Fri, 14 Sep 2012, Ian Campbell wrote:
> On Fri, 2012-09-14 at 16:44 +0100, Stefano Stabellini wrote:
> > On Fri, 14 Sep 2012, Jan Beulich wrote:
> > > >>> On 14.09.12 at 15:56, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> > > >> > > Might we prefer to have a batched version of this call? I don't
> > > >> > > think we can shoehorn the necessary fields into
> > > >> > > xen_add_to_physmap_t though.
> > > >> > 
> > > >> > Actually, talking about this with Stefano, we think we can. Since
> > > >> > both idx and gpfn are unsigned longs they can become unions with a
> > > >> > GUEST_HANDLE without changing the ABI on x86_32 or x86_64.
> > > > 
> > > > Except we need both foreign_domid and size, which currently overlap, and
> > > > there are no other spare bits. Damn.
> > > > 
> > > > Looks like XENMEM_add_to_physmap_range is the only option :-(
> > > 
> > > You could use the first entry in each array to specify the counts,
> > > which would at once allow mixed granularity on each list (should
> > > that ever turn out useful); initially you would certainly want both
> > > counts to match.
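
(To make the layout problem concrete: the unioned xen_add_to_physmap we
had in mind is roughly the one below. Field names and types are from
memory rather than copied from the header, so treat it as a sketch only.
The unions themselves are ABI-safe on x86_32/x86_64 because an unsigned
long and a guest handle have the same size there; the sticking point is
that size and a would-be foreign_domid both want the same 16-bit slot.)

struct xen_add_to_physmap {
    /* Which domain to change the mapping for. */
    domid_t domid;

    /* Number of pages to populate; a foreign_domid would need this
     * same slot, which is the problem. */
    uint16_t size;

    /* Source mapping space. */
    unsigned int space;

    /* Index into the source mapping space, or, when batching, a
     * handle on an array of indexes. */
    union {
        xen_ulong_t idx;
        XEN_GUEST_HANDLE(xen_ulong_t) idxs;
    } u;

    /* GPFN where the source page should appear, or, when batching, a
     * handle on an array of GPFNs. */
    union {
        xen_pfn_t gpfn;
        XEN_GUEST_HANDLE(xen_pfn_t) gpfns;
    } v;
};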
> > 
> > I think it would be better from the interface point of view to have idx
> > become the number of frames, and gpfn a pointer (a GUEST_HANDLE) to a
> > struct that contains both arrays.
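
Something along these lines, say (struct name invented just for
illustration):

/* Both arrays live behind a single guest handle. */
struct xen_add_to_physmap_batch {
    /* Indexes into the source mapping space. */
    XEN_GUEST_HANDLE(xen_ulong_t) idxs;
    /* GPFNs where the pages should appear. */
    XEN_GUEST_HANDLE(xen_pfn_t) gpfns;
};
typedef struct xen_add_to_physmap_batch xen_add_to_physmap_batch_t;
DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_batch_t);

with idx carrying the number of frames and gpfn carrying the handle to
the struct above, so nothing in xen_add_to_physmap itself has to change
size.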
> 
> We'd need to slip in enough info to allow continuations too. Perhaps
> that can be done by manipulating the size and the handle to keep track
> of progress. Or maybe we can get away with just frobbing the size if
> we process the array backwards.
> 
> Which is all sounding pretty gross. I'm very tempted to add
> add_to_physmap2...

Yeah, I am OK with that.
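
For reference, a new call could take an explicit argument struct along
these lines (just a sketch; names and exact layout to be decided):

struct xen_add_to_physmap_range {
    /* Which domain to change the mapping for. */
    domid_t domid;

    /* Source mapping space. */
    uint16_t space;

    /* Number of pages to process. */
    uint16_t size;

    /* For foreign mappings: the domain owning the source pages. */
    domid_t foreign_domid;

    /* Indexes into the source mapping space. */
    XEN_GUEST_HANDLE(xen_ulong_t) idxs;

    /* GPFNs in this domain where the pages should appear. */
    XEN_GUEST_HANDLE(xen_pfn_t) gpfns;
};

A continuation could then keep all of its state in the argument struct:
copy it back with size reduced and the handles advanced past the entries
already processed, or, as you say, just reduce size if the arrays are
walked back to front.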
