Re: [Xen-devel] [PATCH 2/2] grant_table: convert grant table rwlock to percpu rwlock
On 18/11/15 12:07, Ian Campbell wrote:
> On Wed, 2015-11-18 at 11:56 +0000, Malcolm Crossley wrote:
>> On 18/11/15 11:50, Ian Campbell wrote:
>>> On Wed, 2015-11-18 at 11:23 +0000, Malcolm Crossley wrote:
>>>> On 18/11/15 10:54, Jan Beulich wrote:
>>>>>>>> On 18.11.15 at 11:36, <ian.campbell@xxxxxxxxxx> wrote:
>>>>>> On Tue, 2015-11-17 at 17:53 +0000, Andrew Cooper wrote:
>>>>>>> On 17/11/15 17:39, Jan Beulich wrote:
>>>>>>>>>>> On 17.11.15 at 18:30, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>>>>>>> On 17/11/15 17:04, Jan Beulich wrote:
>>>>>>>>>>>>> On 03.11.15 at 18:58, <malcolm.crossley@xxxxxxxxxx> wrote:
>>>>>>>>>>> --- a/xen/common/grant_table.c
>>>>>>>>>>> +++ b/xen/common/grant_table.c
>>>>>>>>>>> @@ -178,6 +178,10 @@ struct active_grant_entry {
>>>>>>>>>>>  #define _active_entry(t, e) \
>>>>>>>>>>>      ((t)->active[(e)/ACGNT_PER_PAGE][(e)%ACGNT_PER_PAGE])
>>>>>>>>>>>
>>>>>>>>>>> +bool_t grant_rwlock_barrier;
>>>>>>>>>>> +
>>>>>>>>>>> +DEFINE_PER_CPU(rwlock_t *, grant_rwlock);
>>>>>>>>>> Shouldn't these be per grant table? And wouldn't doing so
>>>>>>>>>> eliminate the main limitation of the per-CPU rwlocks?
>>>>>>>>> The grant rwlock is per grant table.
>>>>>>>> That's understood, but I don't see why the above items aren't, too.
>>>>>>>
>>>>>>> Ah - because there is never any circumstance where two grant tables
>>>>>>> are locked on the same pcpu.
>>>>>>
>>>>>> So per-cpu rwlocks are really a per-pcpu read lock with a fallthrough
>>>>>> to a per-$resource (here == granttable) rwlock when any writers are
>>>>>> present for any instance of $resource, not just the one where the
>>>>>> write lock is desired, for the duration of any write lock?
>>>>>
>>>>
>>>> The above is a very good description of how the per-cpu rwlocks behave.
>>>> The code stores a pointer to the per-$resource lock in the percpu area
>>>> when a user is reading the per-$resource; this is why the lock is not
>>>> safe if you take the lock for two different per-$resources
>>>> simultaneously. The grant table code only takes one grant table lock
>>>> at any one time, so it is a safe user.
>>>
>>> So essentially the "per-pcpu read lock" as I called it is really in
>>> essence a sort of "byte lock" via the NULL vs non-NULL state of the
>>> per-cpu pointer to the underlying rwlock.
>>
>> It's not quite a byte lock because it stores a full pointer to the
>> per-$resource that it's using. It could be changed to be a byte lock,
>> but then you will need a percpu area per-$resource.
>
> Right, I said "in essence sort of" and put scare quotes around the "byte
> lock" since I realise it's not literally a byte lock.
>
> But really all I was getting at was that it has locked and unlocked
> states in some form or other.

I was just concerned that people may not pick up on the subtle difference
that the percpu read areas are used for multiple resources (of which none
are locked simultaneously by the same CPU), whereas byte locks are
typically used to lock a particular resource, so you can safely lock
multiple resources simultaneously on the same CPU.

> (Maybe I should have said "like a bit lock with 32 or 64 bits, setting
> any of which corresponds to acquiring the lock" ;-))
>

Not quite: setting the per-cpu read area "takes" the read lock for the
particular resource you passed into the percpu rwlock implementation.
Writers of another resource ($resource1) will safely ignore readers of
($resource0).
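A minimal, self-contained sketch of the read side just described, written in
plain C with pthreads and C11 atomics rather than Xen primitives; the names
(percpu_rwlock_t, reader_slot, active_writers, and the explicit cpu index
standing in for smp_processor_id()) are invented for illustration and are not
the patch's code:

/*
 * Toy model of the percpu rwlock read side (not the Xen implementation).
 * An atomic counter of active writers stands in for the patch's global
 * grant_rwlock_barrier flag.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define NR_CPUS 8

typedef struct {
    pthread_rwlock_t rwlock;              /* the real per-$resource rwlock */
} percpu_rwlock_t;

/* Global barrier: non-zero while any writer is active. */
static atomic_int active_writers;

/*
 * Per-CPU slot recording which resource (if any) this CPU is currently
 * reading; NULL means "not reading".  Having a single slot per CPU is
 * exactly why locking two different per-$resources on one CPU is unsafe.
 */
static percpu_rwlock_t *_Atomic reader_slot[NR_CPUS];

static void percpu_read_lock(percpu_rwlock_t *l, int cpu)
{
    if (atomic_load(&active_writers) == 0) {
        /* Fast path: publish which resource this CPU is reading. */
        atomic_store(&reader_slot[cpu], l);
        /* Re-check: a writer may have raised the barrier meanwhile. */
        if (atomic_load(&active_writers) == 0)
            return;
        /* Back out and fall through to the slow path. */
        atomic_store(&reader_slot[cpu], NULL);
    }
    /* Slow path: take the real per-$resource read lock. */
    pthread_rwlock_rdlock(&l->rwlock);
}

static void percpu_read_unlock(percpu_rwlock_t *l, int cpu)
{
    if (atomic_load(&reader_slot[cpu]) == l)
        atomic_store(&reader_slot[cpu], NULL);    /* fast-path exit */
    else
        pthread_rwlock_unlock(&l->rwlock);        /* slow-path exit */
}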
The global barrier will, however, make _all_ readers take the per-$resource
read lock. An optimisation could be to have a barrier variable per-$resource
(stored in the struct grant_table in this case).

Malcolm

> Ian.
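To round out the same toy sketch (again an illustration only, not the patch's
code): the write side raises the barrier so that new readers fall back to the
real rwlock, takes the per-$resource write lock, and then waits for fast-path
readers of this particular resource to drain. The shared declarations are
repeated so the fragment stands alone, and the comment marks where the
per-$resource barrier suggested above would replace the global one.

/*
 * Toy model of the percpu rwlock write side (not the Xen implementation).
 * Declarations repeated from the read-side sketch so this stands alone.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

#define NR_CPUS 8

typedef struct {
    pthread_rwlock_t rwlock;              /* the real per-$resource rwlock */
} percpu_rwlock_t;

static atomic_int active_writers;                     /* global barrier */
static percpu_rwlock_t *_Atomic reader_slot[NR_CPUS]; /* per-CPU read slots */

static void percpu_write_lock(percpu_rwlock_t *l)
{
    int cpu;

    /*
     * Raise the global barrier: new readers of *any* resource now take
     * the slow path.  A per-$resource barrier (e.g. a field inside
     * percpu_rwlock_t, or struct grant_table here) would confine this
     * to readers of this particular resource.
     */
    atomic_fetch_add(&active_writers, 1);

    /* Exclude slow-path readers and other writers of this resource. */
    pthread_rwlock_wrlock(&l->rwlock);

    /* Wait for existing fast-path readers of this resource to drain. */
    for (cpu = 0; cpu < NR_CPUS; cpu++)
        while (atomic_load(&reader_slot[cpu]) == l)
            ;   /* spin; real code would use cpu_relax() here */
}

static void percpu_write_unlock(percpu_rwlock_t *l)
{
    pthread_rwlock_unlock(&l->rwlock);
    atomic_fetch_sub(&active_writers, 1);   /* drop the barrier */
}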