[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [XEN PATCH v2 08/15] x86/vpmu: guard vmx/svm calls with cpu_has_{vmx,svm}


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Sergiy Kibrik <sergiy_kibrik@xxxxxxxx>
  • Date: Mon, 3 Jun 2024 12:02:32 +0300
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 03 Jun 2024 09:02:54 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

16.05.24 14:23, Jan Beulich:
> On 15.05.2024 11:14, Sergiy Kibrik wrote:
>> --- a/xen/arch/x86/cpu/vpmu_amd.c
>> +++ b/xen/arch/x86/cpu/vpmu_amd.c
>> @@ -290,7 +290,7 @@ static int cf_check amd_vpmu_save(struct vcpu *v, bool to_guest)
>>      context_save(v);
>>      if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) &&
>> -         is_msr_bitmap_on(vpmu) )
>> +         is_msr_bitmap_on(vpmu) && cpu_has_svm )
>>          amd_vpmu_unset_msr_bitmap(v);
>
> Assuming the change in the earlier patch can actually be established to be
> safe, along the lines of an earlier comment from Stefano the addition may
> want to move earlier in the overall conditionals (here and below). In fact
> I wonder whether it wouldn't be neater to have
>
> #define is_svm_vcpu(v) (cpu_has_svm && is_hvm_vcpu(v))
>
> at the top of the file, and then use that throughout to replace is_hvm_vcpu().
> Same on the Intel side then, obviously.


sure, will do

>> @@ -288,7 +288,7 @@ static int cf_check core2_vpmu_save(struct vcpu *v, bool to_guest)
>>      /* Unset PMU MSR bitmap to trap lazy load. */
>>      if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) &&
>> -         cpu_has_vmx_msr_bitmap )
>> +         cpu_has_vmx && cpu_has_vmx_msr_bitmap )
>
> Aren't you elsewhere adding IS_ENABLED() to cpu_has_vmx_*, rendering this (and
> similar changes further down) redundant?


indeed, I can move this change later in the series and reuse the checks provided by cpu_has_vmx_msr_bitmap

  -Sergiy



 

