Re: [PATCH v8 02/13] vpci: use per-domain PCI lock to protect vpci structure
Jan,
Jan Beulich <jbeulich@xxxxxxxx> writes:
> On 27.07.2023 12:31, Volodymyr Babchuk wrote:
>>
>> Hi Jan
>>
>> Jan Beulich <jbeulich@xxxxxxxx> writes:
>>
>>> On 27.07.2023 02:56, Volodymyr Babchuk wrote:
>>>> Hi Roger,
>>>>
>>>> Roger Pau Monné <roger.pau@xxxxxxxxxx> writes:
>>>>
>>>>> On Wed, Jul 26, 2023 at 01:17:58AM +0000, Volodymyr Babchuk wrote:
>>>>>>
>>>>>> Hi Roger,
>>>>>>
>>>>>> Roger Pau Monné <roger.pau@xxxxxxxxxx> writes:
>>>>>>
>>>>>>> On Thu, Jul 20, 2023 at 12:32:31AM +0000, Volodymyr Babchuk wrote:
>>>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
>>>>>>>> @@ -498,6 +537,7 @@ void vpci_write(pci_sbdf_t sbdf, unsigned int reg,
>>>>>>>> unsigned int size,
>>>>>>>> ASSERT(data_offset < size);
>>>>>>>> }
>>>>>>>> spin_unlock(&pdev->vpci->lock);
>>>>>>>> + unlock_locks(d);
>>>>>>>
>>>>>>> There's one issue here: some handlers will call pcidevs_lock(), which
>>>>>>> will result in a lock order inversion, as in the previous patch we
>>>>>>> agreed that the locking order was pcidevs_lock first, d->pci_lock
>>>>>>> after.
>>>>>>>
>>>>>>> For example the MSI control_write() handler will call
>>>>>>> vpci_msi_arch_enable() which takes the pcidevs lock. I think I will
>>>>>>> have to look into using a dedicated lock for MSI related handling, as
>>>>>>> that's the only place where I think we have this pattern of taking the
>>>>>>> pcidevs_lock after the d->pci_lock.
>>>>>>
>>>>>> I'll mention this in the commit message. Is there something else that I
>>>>>> should do right now?
>>>>>
>>>>> Well, I don't think we want to commit this as-is with a known lock
>>>>> inversion.
>>>>>
>>>>> The functions that require the pcidevs lock are:
>>>>>
>>>>> pt_irq_{create,destroy}_bind()
>>>>> unmap_domain_pirq()
>>>>>
>>>>> AFAICT those functions require the lock in order to assert that the
>>>>> underlying device doesn't go away, as they do also use d->event_lock
>>>>> in order to get exclusive access to the data fields. Please double
>>>>> check that I'm not mistaken.
>>>>
>>>> You are right, all three functions do not access any PCI state
>>>> directly. However...
>>>>
>>>>> If that's accurate you will have to check the call tree that spawns
>>>>> from those functions in order to modify the asserts to check for
>>>>> either the pcidevs or the per-domain pci_lock being taken.
>>>>
>>>> ... I checked the calls for the PT_IRQ_TYPE_MSI case; there is only one
>>>> call that bothers me: hvm_pi_update_irte(), which calls IOMMU code via
>>>> vmx_pi_update_irte():
>>>>
>>>> amd_iommu_msi_msg_update_ire() or msi_msg_write_remap_rte().
>>>>
>>>> Both functions read basic pdev fields like sbdf or type. I see no
>>>> problem there, as the values of those fields are not supposed to change.
>>>
>>> But whether fields are basic or will never change doesn't matter when
>>> the pdev struct itself suddenly disappears.
>>
>> This is not a problem, as it is expected that d->pci_lock is being held,
>> so the pdev structure will not disappear. I am trying to answer another
>> question: is d->pci_lock enough, or is pcidevs_lock also required?
>
> To answer such questions, may I ask that you first firmly write down
> (and submit) what each of the locks guards?
I can do this for the newly introduced lock. So domain->pci_lock guards
the following (a rough usage sketch follows the list):

1. domain->pcidevs_list. This means that PCI devices can't be added to
   or removed from a domain while the lock is taken in read mode. As a
   byproduct, any pdev assigned to a domain can't be deleted, because it
   would need to be deassigned first. To modify domain->pcidevs_list we
   need to hold both d->pci_lock in write mode and pcidevs_lock.

2. Presence of the pdev->vpci struct for any pdev assigned to a domain.
   The structure itself is protected by pdev->vpci->lock, but to add or
   remove pdev->vpci itself we need to hold d->pci_lock in write mode.
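To make that concrete, here is a minimal, illustrative sketch. It is not
the actual patch: the two example functions are made up, and it only
assumes d->pci_lock is an rwlock and pdev->vpci->lock is a spinlock, as
discussed in this thread.

/*
 * Illustrative only: shows the locking pattern described above.
 */

/* Reader side: access vPCI state of a device assigned to the domain. */
static void example_vpci_access(struct domain *d, struct pci_dev *pdev)
{
    read_lock(&d->pci_lock);        /* pdev and pdev->vpci can't go away */
    spin_lock(&pdev->vpci->lock);   /* protects the contents of ->vpci   */
    /* ... read/modify pdev->vpci fields ... */
    spin_unlock(&pdev->vpci->lock);
    read_unlock(&d->pci_lock);
}

/* Writer side: change the domain's device list or add/remove pdev->vpci. */
static void example_vpci_teardown(struct domain *d, struct pci_dev *pdev)
{
    pcidevs_lock();                 /* global PCI lock first ...          */
    write_lock(&d->pci_lock);       /* ... then the per-domain lock       */
    /* ... remove pdev from the domain's list, free pdev->vpci ... */
    write_unlock(&d->pci_lock);
    pcidevs_unlock();
}

Note that the writer path takes pcidevs_lock first and d->pci_lock after,
i.e. the ordering agreed for the previous patch, which is exactly the order
the MSI handlers discussed above currently invert.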
As for pcidevs_lock, AFAIK there are no strictly written rules about what
exactly is protected by that lock.
--
WBR, Volodymyr