Re: [PATCH v12.2 01/15] vpci: use per-domain PCI lock to protect vpci structure
On Wed, Jan 24, 2024 at 09:48:35AM +0100, Jan Beulich wrote:
> On 23.01.2024 16:07, Roger Pau Monné wrote:
> > On Tue, Jan 23, 2024 at 03:32:12PM +0100, Jan Beulich wrote:
> >> On 15.01.2024 20:43, Stewart Hildebrand wrote:
> >>> @@ -2888,6 +2888,8 @@ int allocate_and_map_msi_pirq(struct domain *d, int
> >>> index, int *pirq_p,
> >>>  {
> >>>      int irq, pirq, ret;
> >>>
> >>> +    ASSERT(pcidevs_locked() || rw_is_locked(&d->pci_lock));
> >>
> >> If either lock is sufficient to hold here, ...
> >>
> >>> --- a/xen/arch/x86/physdev.c
> >>> +++ b/xen/arch/x86/physdev.c
> >>> @@ -123,7 +123,9 @@ int physdev_map_pirq(domid_t domid, int type, int
> >>> *index, int *pirq_p,
> >>>
> >>>      case MAP_PIRQ_TYPE_MSI:
> >>>      case MAP_PIRQ_TYPE_MULTI_MSI:
> >>> +        pcidevs_lock();
> >>>          ret = allocate_and_map_msi_pirq(d, *index, pirq_p, type, msi);
> >>> +        pcidevs_unlock();
> >>>          break;
> >>
> >
> > IIRC (Stewart can further comment) this is done holding the pcidevs
> > lock to keep the path unmodified, as there's no need to hold the
> > per-domain rwlock.
>
> Yet why would we prefer to acquire a global lock when a per-domain one
> suffices?

I was hoping to introduce fewer changes, especially ones that are not
strictly required, as it's less risk. I'm always quite wary of locking
changes.

Thanks, Roger.
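
For reference, the per-domain alternative Jan is suggesting would
presumably look like the sketch below: take the read side of
d->pci_lock around the call instead of the global pcidevs lock, which
would equally satisfy the new ASSERT() in allocate_and_map_msi_pirq()
via rw_is_locked(&d->pci_lock). This is only an illustration of the
suggestion, not code from the series:

    case MAP_PIRQ_TYPE_MSI:
    case MAP_PIRQ_TYPE_MULTI_MSI:
        /* Per-domain rwlock (read side) instead of the global pcidevs lock. */
        read_lock(&d->pci_lock);
        ret = allocate_and_map_msi_pirq(d, *index, pirq_p, type, msi);
        read_unlock(&d->pci_lock);
        break;

The trade-off being debated is contention (a global lock serializes
across all domains) versus the risk of touching a locking path that the
series otherwise leaves unmodified.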