
Re: [Xen-devel] [PATCH v4 13/17] x86/VPMU: Add privileged PMU mode



On 02/04/2014 06:31 AM, Jan Beulich wrote:
On 21.01.14 at 20:08, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx> wrote:
@@ -152,33 +162,62 @@ int vpmu_do_interrupt(struct cpu_user_regs *regs)
          err = vpmu->arch_vpmu_ops->arch_vpmu_save(v);
          vpmu_reset(vpmu, VPMU_CONTEXT_SAVE | VPMU_CONTEXT_LOADED);
- /* Store appropriate registers in xenpmu_data */
-        if ( is_pv_32bit_domain(current->domain) )
+        if ( !is_hvm_domain(current->domain) )
          {
-            /*
-             * 32-bit dom0 cannot process Xen's addresses (which are 64 bit)
-             * and therefore we treat it the same way as a non-privileged
-             * PV 32-bit domain.
-             */
-            struct compat_cpu_user_regs *cmp;
-
-            gregs = guest_cpu_user_regs();
-
-            cmp = (struct compat_cpu_user_regs *)
-                    &v->arch.vpmu.xenpmu_data->pmu.r.regs;
-            XLAT_cpu_user_regs(cmp, gregs);
+            uint16_t cs = (current->arch.flags & TF_kernel_mode) ? 0 : 0x3;
The surrounding if checks !hvm, i.e. both PV and PVH can make it
here. But TF_kernel_mode is meaningful for PV only.

As of this patch PVH doesn't work and you won't get into this code path. The later patch (#16) addresses this. (Although it unnecessarily calculates cs in the line above for PVH, only to call hvm_get_segment_register() later. I'll move it to the !pvh clause there.)
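The dispatch being discussed can be sketched roughly as below. This is a minimal standalone model, not Xen code: the guest-type enum and the plain flags stand in for is_hvm_domain()/is_pv_32bit_domain() and TF_kernel_mode, which are assumptions made for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for Xen's guest-type predicates. */
enum guest_type { GUEST_PV, GUEST_PV32, GUEST_PVH, GUEST_HVM };

/*
 * For PV guests the privilege level can be derived from
 * TF_kernel_mode: 0 (ring 0) in kernel mode, 3 (ring 3) otherwise.
 * This is exactly the "cs" value the patch computes.
 */
static uint16_t pv_cs_rpl(int tf_kernel_mode)
{
    return tf_kernel_mode ? 0 : 3;
}

/*
 * Jan's point: the !is_hvm_domain() branch is reached by both PV
 * and PVH, but TF_kernel_mode is meaningful for PV only.  PVH must
 * obtain the privilege level via hvm_get_segment_register(), like
 * HVM does -- hence moving the computation into a !pvh clause.
 */
static int cs_needs_segment_lookup(enum guest_type t)
{
    return t == GUEST_HVM || t == GUEST_PVH;
}
```

Under these assumptions, only plain PV guests may use the TF_kernel_mode shortcut; HVM and PVH fall through to the segment-register path.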


+
+            /* Store appropriate registers in xenpmu_data */
+            if ( is_pv_32bit_domain(current->domain) )
+            {
+                gregs = guest_cpu_user_regs();
+
+                if ( (vpmu_mode & XENPMU_MODE_PRIV) &&
+                     !is_pv_32bit_domain(v->domain) )
+                    memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                           gregs, sizeof(struct cpu_user_regs));
+                else
+                {
+                    /*
+                     * 32-bit dom0 cannot process Xen's addresses (which are
+                     * 64 bit) and therefore we treat it the same way as a
+                     * non-privileged PV 32-bit domain.
+                     */
+
+                    struct compat_cpu_user_regs *cmp;
+
+                    cmp = (struct compat_cpu_user_regs *)
+                        &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+                    XLAT_cpu_user_regs(cmp, gregs);
+                }
+            }
+            else if ( !is_control_domain(current->domain) &&
+                      !is_idle_vcpu(current) )
+            {
+                /* PV guest */
+                gregs = guest_cpu_user_regs();
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       gregs, sizeof(struct cpu_user_regs));
+            }
+            else
+                memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
+                       regs, sizeof(struct cpu_user_regs));
+
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            gregs->cs = cs;
And now you store a NUL selector (i.e. just the RPL bits) into the
output field?
          }
-        else if ( !is_control_domain(current->domain) &&
-                 !is_idle_vcpu(current) )
+        else
          {
-            /* PV guest */
+            /* HVM guest */
+            struct segment_register cs;
+
              gregs = guest_cpu_user_regs();
              memcpy(&v->arch.vpmu.xenpmu_data->pmu.r.regs,
                     gregs, sizeof(struct cpu_user_regs));
+
+            hvm_get_segment_register(current, x86_seg_cs, &cs);
+            gregs = &v->arch.vpmu.xenpmu_data->pmu.r.regs;
+            gregs->cs = cs.attr.fields.dpl;
And here too? If that's intended, a code comment is a must.

This is an HVM-only path; neither PV nor PVH gets here, so cs should be valid.
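What the consumer of xenpmu_data gets in the cs field is a bare privilege level (RPL bits for PV, the DPL from the cached CS register for HVM), not a full selector, which is what prompted Jan's request for a code comment. A small sketch of that distinction, using an illustrative bitfield that loosely mimics Xen's struct segment_register attr.fields layout (the field widths here are a simplified assumption):

```c
#include <assert.h>
#include <stdint.h>

/*
 * A real x86 segment selector encodes its Requested Privilege
 * Level in the low two bits; the rest is the descriptor index
 * and table-indicator bit.
 */
static unsigned int selector_rpl(uint16_t sel)
{
    return sel & 0x3;
}

/* Simplified model of segment attribute bits (cf. Xen's
 * struct segment_register attr.fields). */
struct seg_attr_fields {
    uint16_t type:4, s:1, dpl:2, p:1;
};

/*
 * The HVM branch stores cs.attr.fields.dpl into gregs->cs: the
 * stored value is 0 or 3, i.e. it looks like a NUL selector with
 * only the RPL bits set, but is meant purely as a ring indicator.
 */
static uint16_t cs_priv_from_hvm(struct seg_attr_fields attr)
{
    return attr.dpl;
}
```

So a consumer testing `regs->cs & 3` works for both the PV and the HVM encoding, which appears to be the intent.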

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

