
Re: [Xen-devel] get_gfn_query() locking

At 13:04 -0400 on 31 Oct (1351688673), Andres Lagar-Cavilla wrote:
> On Oct 30, 2012, at 11:06 AM, Tim Deegan <tim@xxxxxxx> wrote:
> > At 10:53 -0400 on 30 Oct (1351594401), Andres Lagar-Cavilla wrote:
> >>>> And then again, with the p2m lock being recursive these
> >>>> days, I don't think there's any harm calling the other methods
> >>>> here with that lock held.
> >> 
> >> Is the patch you refer to 
> >> http://www.gossamer-threads.com/lists/xen/devel/261025 and the hunk in 
> >> question the following?
> >> + get_gfn_query(d, pfn, &pt); 
> >> + p2m_change_type(d, pfn, pt, p2m_ram_broken); 
> >> + put_gfn(d, pfn); 
> >> 
> >> There really is no way to get rid of that p2m lock-protected critical
> >> section if the domain allows for paging etc. You might want to
> >> introduce a syntactically cleaner unconditional p2m_change_type
> >> variant that doesn't cmpxchg with the previous type -- that is
> >> effectively what goes on here. Should be a tiny amount of refactoring
> >> and the code will be cleaner, no need for query or put.
> > 
> > I don't think that change-type is even what's wanted here.  You want to
> > use some more raw form of set_p2m_entry(), since keeping the MFN is not
> > important.  How about:
> > 
> > guest_physmap_add_entry(d, pfn, MFN_INVALID, p2m_ram_broken);
> > 
> Has the side effect of leaking the current mfn (if any) in the p2m entry for 
> the remainder of the lifetime of the domain.

Since this call is made when the underlying MFN is broken and must
never be touched, that's actually a good thing. :)

> Digression: is this something we want to just go ahead and fix? Or is
> that semantic intended?

The xenmem_add_to_physmap() code is basically a wrapper around this that
DTRT with the thing that was there before.  It Would Be Nice to audit
all the various users of that code and make them consistent, though.
Goes on the list...


Xen-devel mailing list