Re: [RFC PATCH 09/10] [RFC only] xen: iommu: remove last pcidevs_lock() calls in iommu
Hi Stefano,
Stefano Stabellini <sstabellini@xxxxxxxxxx> writes:
> On Wed, 31 Aug 2022, Volodymyr Babchuk wrote:
>> There are a number of cases where pcidevs_lock() is used to protect
>> something that is not related to PCI devices per se.
>>
>> Probably pcidevs_lock() in these places should be replaced with some
>> other lock.
>>
>> This patch is not intended to be merged and is present only to discuss
>> this use of pcidevs_lock()
>>
>> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>
>
> I wonder if we are being too ambitious: is it necessary to get rid of
> pcidevs_lock completely, without replacing all its occurrences with
> something else?
>
> Because it would be a lot easier to only replace pcidevs_lock with
> pdev->lock, replacing the global lock with a per-device lock. That alone
> would be a major improvement, and it would be far easier to verify its
> correctness.
This is the aim of this patch series. We can't just replace pcidevs_lock()
with pdev->lock, because pcidevs_lock() protects not only PCI devices, but
the PCI subsystem as a whole. As I wrote in the cover letter, I identified
the list of things that pcidevs_lock() protects and tried to create
separate locks for them.
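
To illustrate the distinction, a rough sketch (pdev_lock()/pdev_unlock()
are the names used in this series, and the field touched under the
per-device lock is purely illustrative):

/* The global lock serialises subsystem-wide state, e.g. the per-segment
 * list of all devices; a per-device lock cannot cover this. */
pcidevs_lock();
list_add(&pdev->alldevs_list, &pseg->alldevs_list);
pcidevs_unlock();

/* A per-device lock only covers the fields of one struct pci_dev. */
pdev_lock(pdev);
pdev->fault.count = 0;      /* illustrative per-device state */
pdev_unlock(pdev);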
>
> While this patch and the previous patch together remove all occurrences
> of pcidevs_lock without adding pdev->lock, which is difficult to prove
> correct.
The previous patch removes occurrences of pcidevs_lock() in places that
are already protected by the new locks. Sometimes this is d->pdevs_lock,
sometimes it is sufficient to call pcidev_get() to increase the reference
counter, and in other cases we need to call pdev_lock(pdev), and so on.
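
Schematically (illustrative only; do_something() is a placeholder and the
exact helper names/signatures introduced by the series may differ):

/* (a) Walking a domain's device list: the per-domain lock is enough. */
spin_lock(&d->pdevs_lock);
list_for_each_entry ( pdev, &d->pdev_list, domain_list )
    do_something(pdev);
spin_unlock(&d->pdevs_lock);

/* (b) Only need the device to stay alive: take a reference, no lock. */
pcidev_get(pdev);
do_something(pdev);
pcidev_put(pdev);

/* (c) Modifying the device's own state: take the per-device lock. */
pdev_lock(pdev);
pdev->broken = true;        /* illustrative field */
pdev_unlock(pdev);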
The goal of this patch is to discuss the pieces that are left. As you can
see, there is no pdev pointer to lock, and this code does not traverse
d->pdev_list, etc. So it is not immediately obvious what exactly those
ASSERTs are supposed to protect. Maybe they were added by mistake and are
not actually needed. Or maybe I am missing some nuance of how the x86
IOMMU code works. I really need the maintainers' advice here.
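
For example, if those ASSERTs are only meant to serialise access to the
domid bitmap, perhaps a dedicated lock would be enough. Purely a
hypothetical skeleton for discussion, not something this series adds:

/* Hypothetical lock, shown for discussion only. */
static DEFINE_SPINLOCK(domid_map_lock);

domid_t iommu_alloc_domid(unsigned long *map)
{
    domid_t domid;

    spin_lock(&domid_map_lock);
    /* ... existing bitmap search and __set_bit(), unchanged, sets domid ... */
    spin_unlock(&domid_map_lock);

    return domid;
}

void iommu_free_domid(domid_t domid, unsigned long *map)
{
    spin_lock(&domid_map_lock);
    /* ... existing checks and __clear_bit(), unchanged ... */
    spin_unlock(&domid_map_lock);
}

Whether something like that is actually sufficient is exactly the kind of
feedback I am looking for.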
>
>
>> ---
>> xen/drivers/passthrough/vtd/intremap.c | 2 --
>> xen/drivers/passthrough/vtd/iommu.c | 5 -----
>> xen/drivers/passthrough/x86/iommu.c | 5 -----
>> 3 files changed, 12 deletions(-)
>>
>> diff --git a/xen/drivers/passthrough/vtd/intremap.c b/xen/drivers/passthrough/vtd/intremap.c
>> index 1512e4866b..44e3b72f91 100644
>> --- a/xen/drivers/passthrough/vtd/intremap.c
>> +++ b/xen/drivers/passthrough/vtd/intremap.c
>> @@ -893,8 +893,6 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
>>
>> spin_unlock_irq(&desc->lock);
>>
>> - ASSERT(pcidevs_locked());
>> -
>> return msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
>>
>> unlock_out:
>> diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
>> index 87868188b7..9d258d154d 100644
>> --- a/xen/drivers/passthrough/vtd/iommu.c
>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>> @@ -127,8 +127,6 @@ static int context_set_domain_id(struct context_entry *context,
>> {
>> unsigned int i;
>>
>> - ASSERT(pcidevs_locked());
>> -
>> if ( domid_mapping(iommu) )
>> {
>> unsigned int nr_dom = cap_ndoms(iommu->cap);
>> @@ -1882,7 +1880,6 @@ int domain_context_unmap_one(
>> int iommu_domid, rc, ret;
>> bool_t flush_dev_iotlb;
>>
>> - ASSERT(pcidevs_locked());
>> spin_lock(&iommu->lock);
>>
>> maddr = bus_to_context_maddr(iommu, bus);
>> @@ -2601,7 +2598,6 @@ static void __hwdom_init setup_hwdom_rmrr(struct domain *d)
>> u16 bdf;
>> int ret, i;
>>
>> - pcidevs_lock();
>> for_each_rmrr_device ( rmrr, bdf, i )
>> {
>> /*
>> @@ -2616,7 +2612,6 @@ static void __hwdom_init setup_hwdom_rmrr(struct domain *d)
>> dprintk(XENLOG_ERR VTDPREFIX,
>> "IOMMU: mapping reserved region failed\n");
>> }
>> - pcidevs_unlock();
>> }
>>
>> static struct iommu_state {
>> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
>> index f671b0f2bb..4e94ad15df 100644
>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -207,7 +207,6 @@ int iommu_identity_mapping(struct domain *d, p2m_access_t p2ma,
>> struct identity_map *map;
>> struct domain_iommu *hd = dom_iommu(d);
>>
>> - ASSERT(pcidevs_locked());
>> ASSERT(base < end);
>>
>> /*
>> @@ -479,8 +478,6 @@ domid_t iommu_alloc_domid(unsigned long *map)
>> static unsigned int start;
>> unsigned int idx = find_next_zero_bit(map, UINT16_MAX - DOMID_MASK, start);
>>
>> - ASSERT(pcidevs_locked());
>> -
>> if ( idx >= UINT16_MAX - DOMID_MASK )
>> idx = find_first_zero_bit(map, UINT16_MAX - DOMID_MASK);
>> if ( idx >= UINT16_MAX - DOMID_MASK )
>> @@ -495,8 +492,6 @@ domid_t iommu_alloc_domid(unsigned long *map)
>>
>> void iommu_free_domid(domid_t domid, unsigned long *map)
>> {
>> - ASSERT(pcidevs_locked());
>> -
>> if ( domid == DOMID_INVALID )
>> return;
>>
>> --
>> 2.36.1
>>