
Re: [RFC PATCH 09/10] [RFC only] xen: iommu: remove last pcidevs_lock() calls in iommu


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Date: Mon, 20 Feb 2023 00:41:46 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Mon, 20 Feb 2023 00:42:33 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-index: AQHYvUOAlPa0wkGiakOWSGDBM2amAa6z93GAgCQS54A=
  • Thread-topic: [RFC PATCH 09/10] [RFC only] xen: iommu: remove last pcidevs_lock() calls in iommu

Hi Stefano,

Stefano Stabellini <sstabellini@xxxxxxxxxx> writes:

> On Wed, 31 Aug 2022, Volodymyr Babchuk wrote:
>> There are number of cases where pcidevs_lock() is used to protect
>> something that is not related to PCI devices per se.
>> 
>> Probably pcidev_lock in these places should be replaced with some
>> other lock.
>> 
>> This patch is not intended to be merged and is present only to discuss
>> this use of pcidevs_lock()
>> 
>> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx>
>
> I wonder if we are being too ambitious: is it necessary to get rid of
> pcidevs_lock completely, without replacing all its occurrences with
> something else?
>
> Because it would be a lot easier to only replace pcidevs_lock with
> pdev->lock, replacing the global lock with a per-device lock. That alone
> would be a major improvement, and it would be far easier to verify its
> correctness.

This is the aim of this patch series. We can't just replace pcidevs_lock
with pdev->lock, because pcidevs_lock() protects not only PCI devices, but
the PCI subsystem as a whole. As I wrote in the cover letter, I identified
the list of things that pcidevs_lock protects and tried to create separate
locks for each of them.

>
> While this patch and the previous patch together remove all occurrences
> of pcidevs_lock without adding pdev->lock, which is difficult to prove
> correct.

The previous patch removes occurrences of pcidevs_lock() in places that are
already protected by the new locks. Sometimes this is d->pdevs_lock,
sometimes it is sufficient to call pcidev_get() to increase the reference
counter, and in other cases we need to call pdev_lock(pdev), ...

The goal of this patch is to discuss the pieces that are left. As you can
see, there is no pdev pointer to lock, and this code does not traverse
d->pdev_list, etc. So it is not immediately obvious what exactly those
ASSERTs should protect. Maybe they were added by mistake and are not
actually needed, or maybe I am missing some nuance of how the x86 IOMMU
works. I really need a maintainer's advice here.

>
>
>> ---
>>  xen/drivers/passthrough/vtd/intremap.c | 2 --
>>  xen/drivers/passthrough/vtd/iommu.c    | 5 -----
>>  xen/drivers/passthrough/x86/iommu.c    | 5 -----
>>  3 files changed, 12 deletions(-)
>> 
>> diff --git a/xen/drivers/passthrough/vtd/intremap.c 
>> b/xen/drivers/passthrough/vtd/intremap.c
>> index 1512e4866b..44e3b72f91 100644
>> --- a/xen/drivers/passthrough/vtd/intremap.c
>> +++ b/xen/drivers/passthrough/vtd/intremap.c
>> @@ -893,8 +893,6 @@ int pi_update_irte(const struct pi_desc *pi_desc, const 
>> struct pirq *pirq,
>>  
>>      spin_unlock_irq(&desc->lock);
>>  
>> -    ASSERT(pcidevs_locked());
>> -
>>      return msi_msg_write_remap_rte(msi_desc, &msi_desc->msg);
>>  
>>   unlock_out:
>> diff --git a/xen/drivers/passthrough/vtd/iommu.c 
>> b/xen/drivers/passthrough/vtd/iommu.c
>> index 87868188b7..9d258d154d 100644
>> --- a/xen/drivers/passthrough/vtd/iommu.c
>> +++ b/xen/drivers/passthrough/vtd/iommu.c
>> @@ -127,8 +127,6 @@ static int context_set_domain_id(struct context_entry 
>> *context,
>>  {
>>      unsigned int i;
>>  
>> -    ASSERT(pcidevs_locked());
>> -
>>      if ( domid_mapping(iommu) )
>>      {
>>          unsigned int nr_dom = cap_ndoms(iommu->cap);
>> @@ -1882,7 +1880,6 @@ int domain_context_unmap_one(
>>      int iommu_domid, rc, ret;
>>      bool_t flush_dev_iotlb;
>>  
>> -    ASSERT(pcidevs_locked());
>>      spin_lock(&iommu->lock);
>>  
>>      maddr = bus_to_context_maddr(iommu, bus);
>> @@ -2601,7 +2598,6 @@ static void __hwdom_init setup_hwdom_rmrr(struct 
>> domain *d)
>>      u16 bdf;
>>      int ret, i;
>>  
>> -    pcidevs_lock();
>>      for_each_rmrr_device ( rmrr, bdf, i )
>>      {
>>          /*
>> @@ -2616,7 +2612,6 @@ static void __hwdom_init setup_hwdom_rmrr(struct 
>> domain *d)
>>              dprintk(XENLOG_ERR VTDPREFIX,
>>                       "IOMMU: mapping reserved region failed\n");
>>      }
>> -    pcidevs_unlock();
>>  }
>>  
>>  static struct iommu_state {
>> diff --git a/xen/drivers/passthrough/x86/iommu.c 
>> b/xen/drivers/passthrough/x86/iommu.c
>> index f671b0f2bb..4e94ad15df 100644
>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -207,7 +207,6 @@ int iommu_identity_mapping(struct domain *d, 
>> p2m_access_t p2ma,
>>      struct identity_map *map;
>>      struct domain_iommu *hd = dom_iommu(d);
>>  
>> -    ASSERT(pcidevs_locked());
>>      ASSERT(base < end);
>>  
>>      /*
>> @@ -479,8 +478,6 @@ domid_t iommu_alloc_domid(unsigned long *map)
>>      static unsigned int start;
>>      unsigned int idx = find_next_zero_bit(map, UINT16_MAX - DOMID_MASK, 
>> start);
>>  
>> -    ASSERT(pcidevs_locked());
>> -
>>      if ( idx >= UINT16_MAX - DOMID_MASK )
>>          idx = find_first_zero_bit(map, UINT16_MAX - DOMID_MASK);
>>      if ( idx >= UINT16_MAX - DOMID_MASK )
>> @@ -495,8 +492,6 @@ domid_t iommu_alloc_domid(unsigned long *map)
>>  
>>  void iommu_free_domid(domid_t domid, unsigned long *map)
>>  {
>> -    ASSERT(pcidevs_locked());
>> -
>>      if ( domid == DOMID_INVALID )
>>          return;
>>  
>> -- 
>> 2.36.1
>> 
