Re: [PATCH v5 03/14] vpci: move lock outside of struct vpci
On 25.11.2021 12:02, Oleksandr Andrushchenko wrote:
> @@ -68,12 +84,13 @@ int vpci_add_handlers(struct pci_dev *pdev)
>      /* We should not get here twice for the same device. */
>      ASSERT(!pdev->vpci);
>
> -    pdev->vpci = xzalloc(struct vpci);
> -    if ( !pdev->vpci )
> +    vpci = xzalloc(struct vpci);
> +    if ( !vpci )
>          return -ENOMEM;
>
> +    spin_lock(&pdev->vpci_lock);
> +    pdev->vpci = vpci;
>      INIT_LIST_HEAD(&pdev->vpci->handlers);
> -    spin_lock_init(&pdev->vpci->lock);

INIT_LIST_HEAD() can occur ahead of taking the lock, and can also act on
&vpci->handlers rather than &pdev->vpci->handlers.

>      for ( i = 0; i < NUM_VPCI_INIT; i++ )
>      {
> @@ -83,7 +100,8 @@ int vpci_add_handlers(struct pci_dev *pdev)
>      }

This loop wants to live in the locked region because you need to install
vpci into pdev->vpci up front, afaict. I wonder whether that couldn't be
relaxed, but perhaps that's an improvement that can be thought about
later. The main reason I'm looking at this closely is because from the
patch title I didn't expect new locking regions to be introduced right
here; instead I did expect strictly a mechanical conversion.

> @@ -152,8 +170,6 @@ int vpci_add_register(struct vpci *vpci, vpci_read_t *read_handler,
>      r->offset = offset;
>      r->private = data;
>
> -    spin_lock(&vpci->lock);

From the description I can't deduce why this lock is fine to go away
now, i.e. that all call sites now acquire the lock earlier. Therefore
I'd expect at least an assertion to be left here ...

> @@ -183,7 +197,6 @@ int vpci_remove_register(struct vpci *vpci, unsigned int offset,
>      const struct vpci_register r = { .offset = offset, .size = size };
>      struct vpci_register *rm;
>
> -    spin_lock(&vpci->lock);

... and here.
> @@ -370,6 +386,7 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
>              break;
>          ASSERT(data_offset < size);
>      }
> +    spin_unlock(&pdev->vpci_lock);
>
>      if ( data_offset < size )
>      {
> @@ -379,7 +396,6 @@ uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg, unsigned int size)
>
>          data = merge_result(data, tmp_data, size - data_offset, data_offset);
>      }
> -    spin_unlock(&pdev->vpci->lock);
>
>      return data & (0xffffffff >> (32 - 8 * size));
> }

Here and ...

> @@ -475,13 +498,12 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg, unsigned int size,
>              break;
>          ASSERT(data_offset < size);
>      }
> +    spin_unlock(&pdev->vpci_lock);
>
>      if ( data_offset < size )
>          /* Tailing gap, write the remaining. */
>          vpci_write_hw(sbdf, reg + data_offset, size - data_offset,
>                        data >> (data_offset * 8));
> -
> -    spin_unlock(&pdev->vpci->lock);
> }

... even more so here I'm not sure of the correctness of the moving you
do: While pdev->vpci indeed doesn't get accessed, I wonder whether there
wasn't an intention to avoid racing calls to vpci_{read,write}_hw() this
way. In any event I think such movement would need justification in the
description.

Jan