
Re: [Xen-devel] [PATCH 05/16] x86/monitor: relocate code more appropriately



On 7/12/2016 11:07 AM, Corneliu ZUZU wrote:
On 7/12/2016 10:45 AM, Tian, Kevin wrote:
From: Corneliu ZUZU [mailto:czuzu@xxxxxxxxxxxxxxx]
Sent: Monday, July 11, 2016 2:19 PM
+static inline
+void monitor_ctrlreg_adjust_traps(struct domain *d, unsigned int index)
+{
+    /* For now, VMX only. */
+    ASSERT(cpu_has_vmx);
+
+    /* Other CRs than CR3 are always trapped. */
+    if ( VM_EVENT_X86_CR3 == index )
+        vmx_vm_event_update_cr3_traps(d);
      [Kevin wrote]:

    Please add above into a hvm function instead of directly calling
    vmx in common file. (other arch can have it unimplemented).
    Then you don't need change this common code again later when
    other archs are added

---


This is not common code, it's in arch/x86/monitor.c (?) and currently,
as the above ASSERT indicates, only VMX is supported. If support for SVM,
for example, is added in the future, then the hvm move you suggest will
have to be done (Jan also suggested this).
Or, I only now realize, if you guys prefer doing this now, I could add a
vm_event_update_cr3_traps field in hvm_function_table, but BUG() in the
SVM one..

The latter is desired. Instead of BUG, it makes more sense to return an
error on an arch which doesn't register the callback.
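The pattern being suggested could look something like the following. This is only an illustrative sketch, not the actual Xen code: the names (hvm_funcs_sketch, update_cr3_traps, domain_sketch) are hypothetical stand-ins for the real hvm_function_table plumbing. The point is that the common wrapper checks whether the arch registered the callback and returns an error instead of calling BUG():

```c
/* Sketch of an optional per-arch callback in a function table.
 * VMX would register the hook; SVM would leave it NULL, and the
 * common wrapper returns -EOPNOTSUPP rather than BUG()-ing.
 * All identifiers here are hypothetical, for illustration only. */
#include <stdio.h>
#include <errno.h>

struct domain_sketch { int id; };

struct hvm_funcs_sketch {
    /* Optional: only set by archs that implement it (here, "VMX"). */
    void (*update_cr3_traps)(struct domain_sketch *d);
};

static void vmx_update_cr3_traps_stub(struct domain_sketch *d)
{
    printf("updating CR3 load-exiting traps for domain %d\n", d->id);
}

static struct hvm_funcs_sketch hvm_funcs = {
    .update_cr3_traps = vmx_update_cr3_traps_stub,
};

/* Common wrapper: error out cleanly when the arch left the hook unset. */
static int hvm_update_cr3_traps(struct domain_sketch *d)
{
    if ( !hvm_funcs.update_cr3_traps )
        return -EOPNOTSUPP;

    hvm_funcs.update_cr3_traps(d);
    return 0;
}
```

With this shape, arch_monitor_domctl_event could call the wrapper and propagate the error code to the toolstack, instead of the common file calling a vmx_ function directly.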

Thanks
Kevin

Will do.

Thanks,
Zuzu C.

I've decided to discard the separate CR3 load-exiting handling (i.e. to drop vmx_vm_event_update_cr3_traps) entirely, since I find it too complicated to have to handle the bit from two different places (vmx_update_guest_cr and arch_monitor_domctl_event).

Normally such a situation is resolved by counting the number of subscribers to the resource (in this case, counting the 'entities' that want CR3 load-exiting enabled, just as we have a vCPU pause_count to count entities that want the vCPU paused), but it's just a single bit among many and I don't think the overhead is worth it.
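For concreteness, the subscriber-counting idea described (and rejected) above could be sketched as below. This is a hypothetical illustration, not proposed code: the names (cr3_trap_state, cr3_trap_get/put) are made up, and it mirrors the pause_count analogy, where the single hardware control bit stays set while any subscriber still needs it:

```c
/* Illustrative refcount sketch: multiple 'subscribers' share one
 * hardware trap bit (CR3 load-exiting). The bit is set when the first
 * subscriber arrives and cleared when the last one leaves, analogous
 * to a vCPU's pause_count. All names here are hypothetical. */
#include <assert.h>
#include <stdbool.h>

struct cr3_trap_state {
    unsigned int subscribers;   /* entities wanting CR3 load-exiting */
    bool trap_enabled;          /* the single hardware control bit */
};

static void cr3_trap_get(struct cr3_trap_state *s)
{
    if ( s->subscribers++ == 0 )
        s->trap_enabled = true;     /* first subscriber: set the bit */
}

static void cr3_trap_put(struct cr3_trap_state *s)
{
    assert(s->subscribers > 0);
    if ( --s->subscribers == 0 )
        s->trap_enabled = false;    /* last subscriber: clear the bit */
}
```

The overhead in question is the extra counter (and its synchronization) per domain, which is the cost being weighed against tracking a single bit from two call sites.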

Let me know if you disagree and I'm open to suggestions, if you guys have any.

Thanks,
Corneliu.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
