
Re: [Xen-devel] [PATCH 1 of 2] Use memops for mem paging, sharing, and access, instead of domctls



On Thu, 2012-02-02 at 14:25 +0000, Andres Lagar-Cavilla wrote:
> > On Wed, 2012-02-01 at 20:09 +0000, Andres Lagar-Cavilla wrote:
> >>
> >>
> >> +int xc_mem_event_memop(xc_interface *xch, domid_t domain_id,
> >> +                        unsigned int op, unsigned int mode,
> >> +                        uint64_t gfn, void *buffer)
> >> +{
> >> +    xen_mem_event_op_t meo;
> >> +
> >> +    memset(&meo, 0, sizeof(meo));
> >> +
> >> +    meo.op      = op;
> >> +    meo.domain  = domain_id;
> >> +    meo.gfn     = gfn;
> >> +    meo.buffer  = (unsigned long) buffer;
> >
> > Hypercall arguments need to use the xc_hypercall_buffer stuff in order
> > to ensure that the memory is safe to access from a hypercall (in
> > particular that it is mlocked or similar)-- I don't see where that
> > happens for this buffer.
> >
> > meo itself is bounced by do_memory_op so that is OK.
> >
> > I was also a little suspicious of ring_addr and shared_addr in
> > xc_mem_event_control in this regard.
> >
> > Or does the mem sharing code make its own arrangements for these sorts
> > of things somewhere?
> 
> It's rather unclean. They do not get mlock'ed (and there is no sanity
> check to ensure they're page-aligned). They should follow the pattern
> established in xc_mem_paging_load. I'll get on it, probably a separate
> patch.
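
(For reference, and with the caveat that this is a rough, untested sketch
from memory rather than the actual code -- the helper name below is made up
-- the xc_mem_paging_load pattern being referred to is, as I read it:
page-aligned buffer, mlock around the memop, errno preserved across the
munlock:

#include <errno.h>
#include <sys/mman.h>

#include <xenctrl.h>

/* Hypothetical helper, named here only for illustration, showing the
 * locking pattern around the memop. */
static int mem_event_memop_locked(xc_interface *xch, domid_t domain_id,
                                  unsigned int op, unsigned int mode,
                                  uint64_t gfn, void *buffer)
{
    int rc, old_errno;

    if ( !buffer )
        return -EINVAL;

    /* Xen will touch a whole page, so refuse unaligned buffers. */
    if ( ((unsigned long) buffer) & (XC_PAGE_SIZE - 1) )
        return -EINVAL;

    /* Pin the page so it cannot go away while Xen accesses it. */
    if ( mlock(buffer, XC_PAGE_SIZE) )
        return -errno;

    rc = xc_mem_event_memop(xch, domain_id, op, mode, gfn, buffer);

    /* Don't let munlock() clobber the hypercall's errno. */
    old_errno = errno;
    munlock(buffer, XC_PAGE_SIZE);
    errno = old_errno;

    return rc;
}

which is fine as far as it goes, but see below.)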

If you are reworking this, It Would Be Nice If (tm) you could make it use
xc_hypercall_buffer_alloc and friends to allocate and manage the buffer
while you are there, e.g. the void *buffer above would become struct
xc_hypercall_buffer *buffer -- see xc_perfc_query for an example.

It happens that mlock has roughly the semantics we need (and so it
mostly works), but that is not guaranteed by the specs. At some point we
may wish to do something more robust, and using the common interface will
mean that the sharing code benefits from that too.
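
To be concrete -- and this is an untested sketch of the suggestion, not a
review of actual code -- the memop wrapper could end up looking something
like this (the buffer field of the memop takes a plain address rather than
a guest handle, hence HYPERCALL_BUFFER_AS_ARG):

int xc_mem_event_memop(xc_interface *xch, domid_t domain_id,
                       unsigned int op, unsigned int mode,
                       uint64_t gfn, struct xc_hypercall_buffer *buffer)
{
    xen_mem_event_op_t meo;
    DECLARE_HYPERCALL_BUFFER_ARGUMENT(buffer);

    memset(&meo, 0, sizeof(meo));

    meo.op      = op;
    meo.domain  = domain_id;
    meo.gfn     = gfn;
    /* The caller got this buffer from xc_hypercall_buffer_alloc(), so
     * it is already safe to hand to Xen; meo itself is still bounced
     * by do_memory_op as before. */
    meo.buffer  = HYPERCALL_BUFFER_AS_ARG(buffer);

    return do_memory_op(xch, mode, &meo, sizeof(meo));
}

and a caller would then do something along the lines of:

    DECLARE_HYPERCALL_BUFFER(void, buf);
    int rc;

    buf = xc_hypercall_buffer_alloc_pages(xch, buf, 1);
    if ( buf == NULL )
        return -1;

    /* ... fill in buf as required ... */

    rc = xc_mem_event_memop(xch, domain_id, op, mode, gfn,
                            HYPERCALL_BUFFER(buf));

    xc_hypercall_buffer_free_pages(xch, buf, 1);

so the locking (or whatever we end up doing instead) stays hidden behind
the common interface.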

> There is a far more insidious problem that I'm not sure how to solve. Say
> the pager dies unexpectedly. The page hosting the ring is returned to
> dom0's free page pool, and then re-assigned to some other process. Xen
> never gets to find out. At best, the domain crashes before an event is
> generated. Perhaps Xen posts at least one event. Worst-case scenario, the
> guest generates a spate of events. Cases 2 & 3 corrupt the memory of some
> other dom0 process.

How does the memory actually get mapped? Or is it actually memory
belonging to the user process?

The gntalloc driver presumably has a similar sort of problem; it might be
interesting to take inspiration from there?

There's also the possibility of having a paging-specific driver which takes
care of this, so that all userspace does is mmap /dev/xen/pager or
whatever.
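
Purely to illustrate the shape of that (the device name and the
mmap-one-page interface are invented, nothing like this exists today), the
userspace side could be as dumb as:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    void *ring;
    int fd = open("/dev/xen/pager", O_RDWR);

    if ( fd < 0 )
    {
        perror("open");
        return 1;
    }

    /* The driver, not the process, would own the ring page handed out
     * here, so however the pager exits its release() hook can tell Xen
     * to stop posting events before the page is ever reused. */
    ring = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if ( ring == MAP_FAILED )
    {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* ... bind the event channel and run the pager loop ... */

    munmap(ring, 4096);
    close(fd);
    return 0;
}

with the kernel owning the ring page rather than handing Xen the address
of ordinary process memory.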

> I'm glad I got that off my chest :) While it is a very rare race condition
> (I haven't actually experienced it), I am quite unsure how to tackle it
> cleanly.
> 
> Andres
> >
> > Ian.
> >
> >> +
> >> +    return do_memory_op(xch, mode, &meo, sizeof(meo));
> >> +}