Re: [Xen-devel] [PATCH v3 05/13] x86/altp2m: basic data structures and support routines.
On Tue, Jul 7, 2015 at 5:19 PM, Ed White <edmund.h.white@xxxxxxxxx> wrote:
> On 07/07/2015 08:22 AM, Tim Deegan wrote:
>> At 16:04 +0100 on 07 Jul (1436285059), George Dunlap wrote:
>>> On 07/01/2015 07:09 PM, Ed White wrote:
>>>> diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
>>>> index b4f035e..301ca59 100644
>>>> --- a/xen/arch/x86/mm/mm-locks.h
>>>> +++ b/xen/arch/x86/mm/mm-locks.h
>>>> @@ -217,7 +217,7 @@ declare_mm_lock(nestedp2m)
>>>>  #define nestedp2m_lock(d)   mm_lock(nestedp2m, &(d)->arch.nested_p2m_lock)
>>>>  #define nestedp2m_unlock(d) mm_unlock(&(d)->arch.nested_p2m_lock)
>>>>
>>>> -/* P2M lock (per-p2m-table)
>>>> +/* P2M lock (per-non-alt-p2m-table)
>>>>   *
>>>>   * This protects all queries and updates to the p2m table.
>>>>   * Queries may be made under the read lock but all modifications
>>>> @@ -225,10 +225,44 @@ declare_mm_lock(nestedp2m)
>>>>   *
>>>>   * The write lock is recursive as it is common for a code path to look
>>>>   * up a gfn and later mutate it.
>>>> + *
>>>> + * Note that this lock shares its implementation with the altp2m
>>>> + * lock (not the altp2m list lock), so the implementation
>>>> + * is found there.
>>>>   */
>>>>
>>>>  declare_mm_rwlock(p2m);
>>>> -#define p2m_lock(p)           mm_write_lock(p2m, &(p)->lock);
>>>> +
>>>> +/* Alternate P2M list lock (per-domain)
>>>> + *
>>>> + * A per-domain lock that protects the list of alternate p2m's.
>>>> + * Any operation that walks the list needs to acquire this lock.
>>>> + * Additionally, before destroying an alternate p2m all VCPU's
>>>> + * in the target domain must be paused.
>>>> + */
>>>> +
>>>> +declare_mm_lock(altp2mlist)
>>>> +#define altp2m_lock(d)   mm_lock(altp2mlist, &(d)->arch.altp2m_lock)
>>>> +#define altp2m_unlock(d) mm_unlock(&(d)->arch.altp2m_lock)
>>>> +
>>>> +/* P2M lock (per-altp2m-table)
>>>> + *
>>>> + * This protects all queries and updates to the p2m table.
>>>> + * Queries may be made under the read lock but all modifications
>>>> + * need the main (write) lock.
>>>> + *
>>>> + * The write lock is recursive as it is common for a code path to look
>>>> + * up a gfn and later mutate it.
>>>> + */
>>>> +
>>>> +declare_mm_rwlock(altp2m);
>>>> +#define p2m_lock(p) \
>>>> +{ \
>>>> +    if ( p2m_is_altp2m(p) ) \
>>>> +        mm_write_lock(altp2m, &(p)->lock); \
>>>> +    else \
>>>> +        mm_write_lock(p2m, &(p)->lock); \
>>>> +}
>>>>  #define p2m_unlock(p)         mm_write_unlock(&(p)->lock);
>>>>  #define gfn_lock(p,g,o)       p2m_lock(p)
>>>>  #define gfn_unlock(p,g,o)     p2m_unlock(p)
>>>
>>> I've just taken on the role of mm maintainer, and so this late in a
>>> series, having Tim's approval and also having Andy's reviewed-by, I'd
>>> normally just skim through and Ack it as a matter of course.  But I
>>> just wouldn't feel right giving this my maintainer's ack without
>>> understanding the locking a bit better.  So please bear with me here.
>>>
>>> I see in the cover letter that you "sandwiched" the altp2mlist lock
>>> between p2m and altp2m at Tim's suggestion.  But I can't find the
>>> discussion where that was suggested (it doesn't seem to be in response
>>> to v1 patch 5),
>>
>> I suggested changing the locking order here:
>> http://lists.xenproject.org/archives/html/xen-devel/2015-01/msg01824.html
>>
>> Cheers,
>>
>> Tim.
>
> And Tim, Andrew and I subsequently discussed this specific approach
> in a phone meeting.

Right; but it's still important to have all information necessary to
understand the patch series in the code, comments, and changelogs.
I can't go back and listen to your conversation, and although a
reasonable amount of inference from the code can be expected, anything
complicated should be made explicit.  (Perhaps it is explained further
on in the series, in which case the above comment may not require any
action.)

 -George
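P.S. To make concrete the kind of thing I'd like to see spelled out,
here is a sketch of the nesting order I *think* the series relies on:
host p2m lock, then the per-domain altp2mlist lock, then an individual
altp2m's table lock.  This is only my reading of this patch, not code
from the series; the d->arch.altp2m_p2m array and the MAX_ALTP2M bound
are guesses on my part:

/* Illustrative only.  Any walk of the alternate p2m list holds the
 * per-domain altp2mlist lock for the duration, and each altp2m's own
 * table lock nests strictly inside it. */
static void modify_all_altp2ms(struct domain *d)
{
    unsigned int i;

    altp2m_lock(d);                     /* per-domain list lock */
    for ( i = 0; i < MAX_ALTP2M; i++ )  /* MAX_ALTP2M: assumed bound */
    {
        /* d->arch.altp2m_p2m[]: assumed per-domain altp2m array */
        struct p2m_domain *p2m = d->arch.altp2m_p2m[i];

        if ( p2m == NULL )
            continue;

        p2m_lock(p2m);                  /* takes the altp2m rwlock here */
        /* ... update this altp2m's entries ... */
        p2m_unlock(p2m);
    }
    altp2m_unlock(d);
}

If that matches your intent, capturing it in the mm-locks.h comments
would be enough for me.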