
Re: [Xen-devel] [PATCH v4 1/2] gnttab: Introduce rwlock to protect updates to grant table state



>>> On 12.01.15 at 17:03, <chegger@xxxxxxxxx> wrote:
> On 2015/01/12 16:09, Jan Beulich wrote:
>>>>> On 09.01.15 at 16:12, <chegger@xxxxxxxxx> wrote:
>>> @@ -899,26 +899,27 @@ __gnttab_unmap_common(
>>>  
>>>      op->frame = (unsigned long)(op->dev_bus_addr >> PAGE_SHIFT);
>>>  
>>> +    read_lock(&lgt->lock);
>>>      if ( unlikely(op->handle >= lgt->maptrack_limit) )
>>>      {
>>> +        read_unlock(&lgt->lock);
>>>          gdprintk(XENLOG_INFO, "Bad handle (%d).\n", op->handle);
>>>          op->status = GNTST_bad_handle;
>>>          return;
>>>      }
>>>  
>>>      op->map = &maptrack_entry(lgt, op->handle);
>>> -    spin_lock(&lgt->lock);
>> 
>> The extension of the locked region is still not mentioned in the
>> description. As said for v3, if this is really needed, it should be
>> fixed by a separate, much smaller change. (The main reason the
>> first if() doesn't need to happen under the lock is, afair, that
>> lgt->maptrack_limit can only ever increase.)
> 
> It is mentioned in the doc change to docs/misc/grant-tables.txt:
> 
> + The maptrack state is protected by its own spinlock. Any access (read
> + or write) of struct grant_table members that have a "maptrack_"
> + prefix must be made while holding the maptrack lock. The maptrack
> + state can be rapidly modified under some workloads, and the critical
> + sections are very small, thus we use a spinlock to protect them.

That's only a very remote connection. If there are (perceived)
locking deficiencies, they should be called out explicitly (and
fixed separately). But as mentioned, I think you should not be
altering the locking here.
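
For illustration, a minimal standalone C sketch of the monotone-limit
argument above (the names are hypothetical, not the Xen code):

    #include <stdbool.h>

    /*
     * A limit that only ever grows can be checked without holding a
     * lock: a handle the guest legitimately owns was handed out while
     * limit > handle, and since the limit never shrinks, an unlocked
     * read can only reject handles that were never valid.
     */
    static volatile unsigned int maptrack_limit; /* grows monotonically */

    static bool handle_in_range(unsigned int handle)
    {
        return handle < maptrack_limit; /* no lock needed */
    }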

>>> @@ -939,7 +940,8 @@ __gnttab_unmap_common(
>>>      TRACE_1D(TRC_MEM_PAGE_GRANT_UNMAP, dom);
>>>  
>>>      rgt = rd->grant_table;
>>> -    double_gt_lock(lgt, rgt);
>>> +    read_lock(&rgt->lock);
>>> +    double_maptrack_lock(lgt, rgt);
>> 
>> Repeating what I said for v3: "The nesting of the two locks should
>> be mentioned in the doc change" (at the very least).
> 
> ... to be removed again in the second patch?
> double_maptrack_lock() goes away in the second patch.

Yes, sorry; any patch series needs to be fully bisectable (or at
the very least not intentionally broken, relying on a later patch
to fix the brokenness). If a patch really depends on a later one
to fix it, the patches shouldn't have been split.
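
For reference, a hedged sketch of the nesting the doc change would
then need to spell out, following the hunk above; that
double_maptrack_lock() imposes a fixed internal ordering (e.g. by
lock address, as double_gt_lock() does) is an assumption here:

    read_lock(&rgt->lock);          /* grant table rwlock first ... */
    double_maptrack_lock(lgt, rgt); /* ... then both maptrack locks,
                                     * acquired in a fixed order so two
                                     * domains unmapping from each other
                                     * cannot deadlock */
    /* ... critical section ... */
    double_maptrack_unlock(lgt, rgt);
    read_unlock(&rgt->lock);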

>>> @@ -1290,10 +1293,12 @@ gnttab_unpopulate_status_frames(struct domain *d, 
>>> struct grant_table *gt)
>>>      gt->nr_status_frames = 0;
>>>  }
>>>  
>>> +/* Grow the grant table. The caller must hold the grant table's
>>> +   write lock before calling this function. */
>> 
>> Above you said you fixed the coding style issues, but at least here
>> you didn't.
> 
> The comments match with the style as done everywhere in that file.

The existing style in the file is not the reference; please see
./CODING_STYLE.
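
For reference, ./CODING_STYLE wants multi-line comments laid out like
this, so the comment in the hunk above would become:

    /*
     * Grow the grant table.  The caller must hold the grant table's
     * write lock before calling this function.
     */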

>>> @@ -2909,7 +2919,7 @@ gnttab_release_mappings(
>>>          }
>>>  
>>>          rgt = rd->grant_table;
>>> -        spin_lock(&rgt->lock);
>>> +        read_lock(&rgt->lock);
>>>  
>>>          act = &active_entry(rgt, ref);
>>>          sha = shared_entry_header(rgt, ref);
>>> @@ -2969,7 +2979,7 @@ gnttab_release_mappings(
>>>          if ( act->pin == 0 )
>>>              gnttab_clear_flag(_GTF_reading, status);
>>>  
>>> -        spin_unlock(&rgt->lock);
>>> +        read_unlock(&rgt->lock);
>> 
>> Repeating the question on v3: "The maptrack entries are being
>> accessed between these two - don't you need both locks here?"
> 
> Yes, and they will be removed again in the second patch.
> 
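For illustration only (patch 2 reworks this code, and the
maptrack_lock name follows the doc text quoted earlier rather than
anything in this patch), "both locks" here would mean something like:

    read_lock(&rgt->lock);           /* protects act/sha of the remote
                                      * domain's grant table */
    spin_lock(&gt->maptrack_lock);   /* protects the maptrack entries
                                      * of the dying domain walked in
                                      * this loop */
    /* ... release the mapping ... */
    spin_unlock(&gt->maptrack_lock);
    read_unlock(&rgt->lock);
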
>> Overall I find it quite unfriendly (wasting reviewing bandwidth) to
>> submit a new version with a meaningful number of comments on the
>> previous version un-addressed.
> 
> Sorry, I lost track of what I did and what I did not do.

But it can't be that difficult to go through the comments you got
on the previous version again before you send out the new one.
Please appreciate that reviewing bandwidth is limited.

Jan
