
Re: [PATCH v3 5/7] vpci: add SR-IOV support for PVH Dom0


  • To: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 8 May 2026 07:52:04 +0200
  • Cc: Mykyta Poturai <Mykyta_Poturai@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
  • Delivery-date: Fri, 08 May 2026 05:52:23 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 07.05.2026 22:40, Volodymyr Babchuk wrote:
> Jan Beulich <jbeulich@xxxxxxxx> writes:
>> On 06.05.2026 11:39, Mykyta Poturai wrote:
>>> On 5/4/26 08:37, Jan Beulich wrote:
>>>> On 23.04.2026 12:12, Mykyta Poturai wrote:
>>>>> On 4/21/26 17:43, Jan Beulich wrote:
>>>>>> On 09.04.2026 16:01, Mykyta Poturai wrote:
>>>>>>> From: Stewart Hildebrand <stewart.hildebrand@xxxxxxx>
>>>>>>>
>>>>>>> This code is expected to only be used by privileged domains,
>>>>>>> unprivileged domains should not get access to the SR-IOV capability.
>>>>>>>
>>>>>>> Implement R/W handlers for the PCI_SRIOV_CTRL register to dynamically
>>>>>>> map/unmap VF BARs. Recalculate BAR sizes before mapping VFs to account
>>>>>>> for possible changes in the System Page Size register. Also force VFs
>>>>>>> to always use emulated reads for the command register; this is needed
>>>>>>> to prevent some drivers from accidentally unmapping BARs.
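
For illustration, such a control register write handler might look roughly
like the sketch below; the size_and_map_vf_bars() / unmap_vf_bars() helpers
are invented names for this sketch, not the patch's actual code:

    /* Sketch only: intercept writes to PCI_SRIOV_CTRL and map/unmap VF
     * BARs when VF Enable toggles.  Helper names are illustrative. */
    static void cf_check sriov_ctrl_write(const struct pci_dev *pdev,
                                          unsigned int reg, uint32_t val,
                                          void *data)
    {
        bool old_vfe = pci_conf_read16(pdev->sbdf, reg) & PCI_SRIOV_CTRL_VFE;
        bool new_vfe = val & PCI_SRIOV_CTRL_VFE;

        pci_conf_write16(pdev->sbdf, reg, val);

        if ( new_vfe && !old_vfe )
            /* BAR sizes may depend on System Page Size, so re-size first. */
            size_and_map_vf_bars(pdev);
        else if ( !new_vfe && old_vfe )
            unmap_vf_bars(pdev);
    }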
>>>>>>
>>>>>> This apparently refers to the change to vpci_init_header(). Writes are
>>>>>> already intercepted. How would a read lead to accidental BAR unmap? Even
>>>>>> for writes I don't see how a VF driver could accidentally unmap BARs, as
>>>>>> the memory decode bit there is hardwired to 0.
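
As an illustration of what emulated command register reads could mean here,
a sketch, assuming a bars_mapped-style flag such as vPCI keeps for the
header (names and the exact state flag are assumptions, not the patch):

    /* Sketch only: emulated command register read for a VF.  The SR-IOV
     * spec hardwires the VF's Memory Space Enable bit to 0, so reflect
     * the vPCI-tracked mapping state instead of the raw hardware value. */
    static uint32_t cf_check vf_cmd_read(const struct pci_dev *pdev,
                                         unsigned int reg, void *data)
    {
        const struct vpci_header *header = data;
        uint32_t cmd = pci_conf_read16(pdev->sbdf, reg);

        if ( header->bars_mapped )      /* assumed vPCI state flag */
            cmd |= PCI_COMMAND_MEMORY;

        return cmd;
    }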
>>>>>>
>>>>>>> Discovery of VFs is
>>>>>>> done by Dom0, which must register them with Xen.
>>>>>>
>>>>>> If we intercept control register writes, why would we still require
>>>>>> Dom0 to report the VFs that appear?
>>>>>>
>>>>>
>>>>> Sorry, I don't understand this question. You specifically requested this
>>>>> to be done this way in V2. Quoting your reply from V2 below.
>>>>>
>>>>>   > Aren't you effectively busy-waiting for these 100ms, by simply
>>>>>   > returning "true" from vpci_process_pending() until the time has
>>>>>   > passed? This imo is a no-go. You want to set a timer and put the
>>>>>   > vCPU to sleep, to wake it up again when the timer has expired.
>>>>>   > That'll then eliminate the need for the not-so-nice patch 4.
>>>>>
>>>>>   > Question is whether we need to actually go this far (right away).
>>>>>   > I expect you don't mean to hand PFs to DomU-s. As long as we keep
>>>>>   > them in the hardware domain, can't we trust it to set things up
>>>>>   > correctly, just like we trust it in a number of other aspects?
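
For reference, the timer-based approach suggested there might look roughly
like this sketch; sriov_timer_cb() / sriov_wait_for_vfs() are invented
names, while init_timer(), set_timer(), and vcpu_pause_nosync() are the
usual Xen primitives:

    /* Sketch only: instead of spinning in vpci_process_pending(), arm a
     * timer and pause the vCPU; the callback unpauses it once the 100ms
     * the spec allows for VFs to come up have elapsed. */
    static void cf_check sriov_timer_cb(void *data)
    {
        vcpu_unpause(data);
    }

    static void sriov_wait_for_vfs(struct vcpu *v, struct timer *t)
    {
        init_timer(t, sriov_timer_cb, v, v->processor);
        vcpu_pause_nosync(v);
        set_timer(t, NOW() + MILLISECS(100));
    }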
>>>>
>>>> How's any of this related to the question I raised here, or your reply
>>>> thereto? If we intercept PCI_SRIOV_CTRL, we know when VFs are created.
>>>> Why still demand Dom0 to report them then?
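
That is, in principle Xen could derive the VFs itself from the PF's SR-IOV
capability, along these lines (a sketch only; how registration would
integrate with pci_add_device() is exactly what's under discussion):

    /* Sketch only: compute VF routing IDs per the SR-IOV spec
     * (VF i's RID = PF RID + First VF Offset + i * VF Stride). */
    static void enumerate_vfs(const struct pci_dev *pf, unsigned int pos)
    {
        uint16_t off    = pci_conf_read16(pf->sbdf, pos + PCI_SRIOV_VF_OFFSET);
        uint16_t stride = pci_conf_read16(pf->sbdf, pos + PCI_SRIOV_VF_STRIDE);
        uint16_t nr     = pci_conf_read16(pf->sbdf, pos + PCI_SRIOV_NUM_VF);
        unsigned int i;

        for ( i = 0; i < nr; i++ )
        {
            unsigned int bdf = ((pf->bus << 8) | pf->devfn) + off + i * stride;

            /* Registering the VF (in place of the Dom0 report) would
             * go here. */
            printk("VF %04x:%02x:%02x.%u\n", pf->seg,
                   bdf >> 8, (bdf >> 3) & 0x1f, bdf & 7);
        }
    }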
>>>>
>>>
>>> The spec states that VFs can take up to 100ms after the VF_ENABLE bit is
>>> set before they become usable. We discussed in V2 that it is not
>>> acceptable to perform the required 100ms wait in Xen while blocking a
>>> domain, and not blocking would require some mechanism to only allow a
>>> domain to run for precisely 99 (or more?) ms. You yourself suggested
>>> that we can trust the hardware domain with registering VFs if we already
>>> trust it with other PCI-related stuff. Did you change your mind, or am I
>>> completely misunderstanding this question?
>>
>> No, I still think that we can trust hwdom enough. Nevertheless we should
>> aim at being independent of it where possible. And I seem to recall that
>> I had also outlined an approach for avoiding the 100ms spin-wait in the
>> hypervisor.
> 
> I want to clarify: you are saying that Xen should not wait for hwdom to
> report VFs and should instead create them by itself. Is this correct?

If that's technically possible, yes.

Jan



 

