
Re: [RFC for-4.20 v1 1/1] x86/hvm: Introduce Xen-wide ASID Allocator


  • To: Vaishali Thakkar <vaishali.thakkar@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 27 Jun 2024 16:47:03 +0200
  • Cc: andrew.cooper3@xxxxxxxxxx, roger.pau@xxxxxxxxxx, george.dunlap@xxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 27 Jun 2024 14:47:24 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 27.06.2024 15:41, Vaishali Thakkar wrote:
> On 6/13/24 1:04 PM, Jan Beulich wrote:
>> On 24.05.2024 14:31, Vaishali Thakkar wrote:
>>> -void hvm_asid_flush_core(void)
>>> +void hvm_asid_flush_all(void)
>>>   {
>>> -    struct hvm_asid_data *data = &this_cpu(hvm_asid_data);
>>> +    struct hvm_asid_data *data = &asid_data;
>>>   
>>> -    if ( data->disabled )
>>> +    if ( data->disabled)
>>>           return;
>>>   
>>> -    if ( likely(++data->core_asid_generation != 0) )
>>> +    if ( likely(++data->asid_generation != 0) )
>>>           return;
>>>   
>>>       /*
>>> -     * ASID generations are 64 bit.  Overflow of generations never happens.
>>> -     * For safety, we simply disable ASIDs, so correctness is established; it
>>> -     * only runs a bit slower.
>>> -     */
>>> +    * ASID generations are 64 bit.  Overflow of generations never happens.
>>> +    * For safety, we simply disable ASIDs, so correctness is established; it
>>> +    * only runs a bit slower.
>>> +    */
>>
>> Please don't screw up indentation; this comment was well-formed before. What
>> I question is whether, with the ultimate purpose in mind, the comment actually
>> will continue to be correct. We can't simply disable ASIDs when we have SEV
>> VMs running, can we?
> 
> You're right about SEV VMs. But wouldn't we still want to have a way to 
> disable ASIDs when no SEV VMs are running?

Possibly. Yet that would still render the comment stale in the common case:
as written, it suggests that simply disabling ASIDs on the fly is an okay
thing to do.

>>> +        c = &cpu_data[cpu];
>>> +        /* Check for erratum #170, and leave ASIDs disabled if it's present. */
>>> +        if ( !cpu_has_amd_erratum(c, AMD_ERRATUM_170) )
>>> +            nasids += cpuid_ebx(0x8000000aU);
>>
>> Why += ? Don't you mean to establish the minimum across all CPUs? Which would
>> be assuming there might be an asymmetry, which generally we assume there
>> isn't.
>> And if you invoke CPUID, you'll need to do so on the very CPU, not many times
>> in a row on the BSP.
> 
> Hmm, I'm not sure if I understand your point completely. Just to clarify, 
> do you mean even if it's assumed that asymmetry is not there, we should 
> find and establish min ASID count across all online CPUs and ensure that 
> CPUID instruction is executed on the respective CPU?

No, I mean that
- if we assume there may be asymmetry, CPUID will need invoking once on
  every CPU (including ones later being hot-onlined),
- if we assume no asymmetry, there's no need for accumulation.

>>> --- a/xen/arch/x86/mm/hap/hap.c
>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>> @@ -739,13 +739,13 @@ static bool cf_check flush_tlb(const unsigned long *vcpu_bitmap)
>>>           if ( !flush_vcpu(v, vcpu_bitmap) )
>>>               continue;
>>>   
>>> -        hvm_asid_flush_vcpu(v);
>>> -
>>>           cpu = read_atomic(&v->dirty_cpu);
>>>           if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) && v->is_running )
>>>               __cpumask_set_cpu(cpu, mask);
>>>       }
>>>   
>>> +    hvm_asid_flush_domain(d);
>>
>> Hmm, that's potentially much more flushing than is needed here. There
>> surely wants to be a way to flush at a granularity smaller than the
>> entire domain. (Likely applies elsewhere as well.)
> 
> I see, does it mean we still need a way to flush at the vcpu level in the
> case of HAP?

For correctness it's not really "need", but for performance I'm pretty sure
it's going to be "want".

Jan



 

