
Re: [PATCH 4/6] nestedsvm: Disable TscRateMSR


  • To: George Dunlap <george.dunlap@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 22 Feb 2024 12:26:17 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 22 Feb 2024 11:26:28 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 22.02.2024 12:00, George Dunlap wrote:
> On Thu, Feb 22, 2024 at 5:50 PM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>
>> On 22.02.2024 10:30, George Dunlap wrote:
>>> On Wed, Feb 21, 2024 at 6:52 PM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>>> But then of course Andrew may know of reasons why all of this is done
>>>>>> in calculate_host_policy() in the first place, rather than in HVM
>>>>>> policy calculation.
>>>>>
>>>>> It sounds like maybe you're confusing host_policy with
>>>>> x86_capabilities?  From what I can tell:
>>>>>
>>>>> *  the "basic" cpu_has_X macros resolve to boot_cpu_has(), which
>>>>> resolves to cpu_has(&boot_cpu_data, ...), which is completely
>>>>> independent of the cpu-policy.c:host_cpu_policy
>>>>>
>>>>> * cpu-policy.c:host_cpu_policy only affects what is advertised to
>>>>> guests, via {pv,hvm}_cpu_policy and featureset bits.  Most notably a
>>>>> quick skim doesn't show any mechanism by which host_cpu_policy could
>>>>> affect what features Xen itself decides to use.
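
For reference, the chain described above amounts to roughly the following
(a simplified sketch, not the literal cpufeature.h definitions; cpu_has_xsave
is just an example of a "basic" cpu_has_X macro):

    /* Tests read boot_cpu_data.x86_capability[] directly ... */
    #define cpu_has(c, bit)    test_bit(bit, (c)->x86_capability)
    #define boot_cpu_has(bit)  cpu_has(&boot_cpu_data, bit)
    #define cpu_has_xsave      boot_cpu_has(X86_FEATURE_XSAVE)
    /* ... and never consult cpu-policy.c:host_cpu_policy. */
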
>>>>
>>>> I'm not mixing the two, no; the two are still insufficiently disentangled.
>>>> There's really no reason (long term) to have both host policy and
>>>> x86_capabilities. Therefore I'd prefer that new code (including the basically
>>>> fundamental re-write that is going to be needed for nested) avoid
>>>> needlessly further extending x86_capabilities. Unless of course there's
>>>> something fundamentally wrong with eliminating the redundancy, which
>>>> likely Andrew would be in the best position to point out.
>>>
>>> So I don't know the history of how things got to be the way they are,
>>> nor really much about the code beyond what I've gathered from skimming
>>> through it while creating this patch series.  But from that impression,
>>> the only issue I really see with the current code is the confusing
>>> naming.  The cpufeature.h code has this nice infrastructure to allow
>>> you to, for instance, enable or disable certain bits on the
>>> command-line; and the interface for querying all the different bits of
>>> functionality is all nicely put in one place.  Moving the
>>> svm_feature_flags into x86_capabilities would immediately allow SVM to
>>> take advantage of this infrastructure; it's not clear to me how this
>>> would be "needless".
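
To make that concrete, a rough before/after sketch (the SVM_FEATURE_TSCRATEMSR
and X86_FEATURE_TSCRATEMSR identifiers here are illustrative, not necessarily
the exact names in the tree):

    /* Today: SVM keeps its own flag word outside x86_capability[], so
     * setup_{force,clear}_cpu_cap() and command-line feature overrides
     * cannot touch these bits. */
    extern u32 svm_feature_flags;
    #define cpu_has_svm_feature(f)  (svm_feature_flags & (1u << (f)))
    #define cpu_has_tsc_ratio       cpu_has_svm_feature(SVM_FEATURE_TSCRATEMSR)

    /* Hypothetical alternative: the same bit living in x86_capability[]. */
    #define cpu_has_tsc_ratio       boot_cpu_has(X86_FEATURE_TSCRATEMSR)
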
>>>
>>> Furthermore, it looks to me like host_cpu_policy is used as a starting
>>> point for generating pv_cpu_policy and hvm_cpu_policy, both of which
>>> are only used for guest cpuid generation.  Given that the format of
>>> those policies is fixed, and there's a lot of "copy this bit from the
>>> host policy wholesale", it seems like no matter what, you'd want a
>>> host_cpu_policy.
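
Roughly, as a paraphrase of that derivation (not verbatim code):

    /* In calculate_{pv,hvm}_max_policy() (paraphrased): */
    *p = host_cpu_policy;    /* start from the host policy wholesale ... */
    /* ... then restrict/adjust what may be advertised to guests. */
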
>>>
>>> And in any case -- all that is kind of moot.  *Right now*,
>>> host_cpu_policy is only used for guest cpuid policy creation; *right
>>> now*, the nested virt features of AMD are handled in the
>>> host_cpu_policy; *right now*, we're advertising to guests bits which
>>> are not properly virtualized; *right now* these bits are actually set
>>> unconditionally, regardless of whether they're even available on the
>>> hardware; *right now*, Xen uses svm_feature_flags to determine its own
>>> use of TscRateMSR; so *right now*, removing this bit from
>>> host_cpu_policy won't prevent Xen from using TscRateMSR itself.
>>>
>>> (Unless my understanding of the code is wrong, in which case I'd
>>> appreciate a correction.)
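
To illustrate that last point (a hypothetical call site; the helper name is
made up, only the guard matters here):

    /* Xen's own decision to use TscRateMSR is gated on svm_feature_flags: */
    if ( cpu_has_tsc_ratio )
        enable_tsc_scaling(v);    /* hypothetical helper */

    /* host_cpu_policy, by contrast, only feeds {pv,hvm}_cpu_policy, i.e.
     * what guests are told via CPUID, so clearing the bit there does not
     * change the check above. */
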
>>
>> There's nothing wrong afaics, just missing at least one aspect: Did you
>> see all the featureset <-> policy conversions in cpu-policy.c? That (to
>> me at least) clearly is a sign of unnecessary duplication of the same
>> data. This goes as far as seeding the host policy from the raw one, just
>> to then immediately run x86_cpu_featureset_to_policy(), thus overwriting
>> a fair part of what was first taken from the raw policy. That's necessary
>> right now, because setup_{force,clear}_cpu_cap() act on
>> boot_cpu_data.x86_capability[], not the host policy.
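
The sequence being referred to is, in rough paraphrase (not a verbatim quote
of cpu-policy.c):

    static void __init calculate_host_policy(void)
    {
        struct cpu_policy *p = &host_cpu_policy;

        *p = raw_cpu_policy;    /* seed from the raw policy ... */

        /* ... then immediately overwrite the feature leaves from
         * boot_cpu_data.x86_capability[], because that is where
         * setup_{force,clear}_cpu_cap() record their adjustments. */
        x86_cpu_featureset_to_policy(boot_cpu_data.x86_capability, p);

        /* further adjustments ... */
    }
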
>>
>> As to the "needless" further up, it applies only insofar as moving those
>> bits into x86_capability[] would further duplicate information, rather than
>> (for that piece at least) putting them into the policies right away. But
>> yes, if the goal is to have setup_{force,clear}_cpu_cap() be able to
>> control those bits as well, then taking the intermediate step would be
>> unavoidable at this point in time.
> 
> I'm still not sure of what needs to happen to move this forward.
> 
> As I said, I'm not opposed to doing some prep work; but I don't want
> to randomly guess as to what kinds of clean-up needs to be done, only
> to be told it was wrong (either by you when I post it or by Andy
> sometime after it's checked in).
> 
> I could certainly move svm_feature_flags into host_cpu_policy, and
> have cpu_svm_feature_* reference host_cpu_policy instead (after moving
> the nested virt "guest policy" tweaks into hvm_cpu_policy); but as far
> as I can tell, that would be the *very first* instance of Xen using
> host_cpu_policy in that manner.  I'd like more clarity that this is
> the long-term direction things are going in before taking that step.
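
For illustration, such a move might end up looking roughly like this (purely
a hypothetical shape; the field and bit names are assumptions):

    /* svm_feature_flags folded into the host policy (hypothetical): */
    #define cpu_has_svm_feature(f) \
        (host_cpu_policy.extd.raw[0xa].d & (1u << (f)))

    /* ... with the nested-virt tweaks applied to hvm_cpu_policy instead
     * of host_cpu_policy. */
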
> 
> If you (plural) don't have time now to refresh your memory / make an
> informed decision about what you want to happen, then please consider
> just taking the patch as it is; it doesn't make future changes any
> harder.

I'd be willing to ack the patch as is if I understood the piece of code
in calculate_host_policy() that is being changed here. My take is that
that code wants adjusting in a prereq patch, by moving it wherever
applicable. Without indications to the contrary, it living there is plain
wrong, and the patch here touching that function isn't quite right either,
even if only as a consequence of the original issue.

If we can't move forward without at least a little bit of input from
Andrew, then that's the way it is - we need to wait. As you likely know,
I have dozens of patches in that state ...

Jan
