
Re: [Xen-devel] [PATCH 3 of 9] Enforce ordering constraints for the page alloc lock in the PoD code



At 06:59 -0700 on 02 Nov (1320217144), andres@xxxxxxxxxxxxxxxx wrote:
> > - I think it would be better to generate generic spin-lock-with-level
> >   and unlock-with-level wrapper functions rather than generating the
> >   various checks and having to assemble them into lock_page_alloc() and
> >   unlock_page_alloc() by hand.
> 
> The final intent is to have these macros establish ordering constraints
> for the fine-grained p2m lock, which is not only "grab a spinlock".
> Granted, we do not know yet whether we'll need such a fine-grained
> approach, but I think it's worth keeping things separate.

OK.  We can keep it as it is for now, and maybe there'll be an
opportunity to tidy things up later on.
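For illustration, the kind of generic wrapper I was thinking of might
look roughly like this -- a sketch only, with hypothetical names
(lock_with_level, unlock_with_level, the per-CPU lock_level counter),
not anything taken from this series:

/* Sketch only: names are illustrative, not the code in this series. */
static DEFINE_PER_CPU(int, lock_level);

static inline void lock_with_level(spinlock_t *l, int level,
                                   int *unlock_level)
{
    /* Enforce ordering: never take a lock at or below a level
     * we already hold on this CPU. */
    ASSERT(this_cpu(lock_level) < level);
    spin_lock(l);
    /* Remember the level to restore when this lock is dropped. */
    *unlock_level = this_cpu(lock_level);
    this_cpu(lock_level) = level;
}

static inline void unlock_with_level(spinlock_t *l, int unlock_level)
{
    this_cpu(lock_level) = unlock_level;
    spin_unlock(l);
}

Callers would then pair lock_with_level()/unlock_with_level() instead of
open-coding the level checks around each spin_lock()/spin_unlock().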

> As a side-note, an earlier version of my patches did enforce ordering,
> except things got really hairy with mem_sharing_unshare_page (which would
> jump levels up to grab shr_lock) and pod sweeps. I (think I) have
> solutions for both, but I'm not ready to push those yet.

Great!

> > - p2m->pod.page_alloc_unlock_level is wrong, I think; I can see that you
> >   need somewhere to store the unlock-level but it shouldn't live in
> >   the p2m state - it's at most a per-domain variable, so it should
> >   live in the struct domain; might as well be beside the lock itself.
> 
> Ok, sure. Although I think I need to make clear that this ordering
> constraint only applies within the pod code, and that's why I wanted to
> keep the book-keeping within the pod struct.

I see.  That makes sense, but since there are now multiple p2m structs
per domain, I think it's better to put it beside the lock with a comment
saying that it's only used by pod. 
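Something along these lines is what I have in mind -- again just a
sketch of the placement, reusing the page_alloc_unlock_level name from
the patch, with the comment wording purely illustrative:

struct domain {
    ...
    spinlock_t page_alloc_lock;     /* protects page-allocation state */
    int page_alloc_unlock_level;    /* only used by the PoD code, to
                                     * record the level to restore when
                                     * it drops page_alloc_lock */
    ...
};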

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

