Re: [Xen-devel] [PATCH] VMX: wbinvd when vmentry under UC
On 25/11/13 16:14, Liu, Jinsong wrote:
> From e2d47e2f75bac6876b7c2eaecfe946966bf27516 Mon Sep 17 00:00:00 2001
> From: Liu Jinsong <jinsong.liu@xxxxxxxxx>
> Date: Tue, 26 Nov 2013 04:53:17 +0800
> Subject: [PATCH] VMX: wbinvd when vmentry under UC
>
> This patch flushes the cache on vmentry back to a UC guest, to prevent
> the cache being polluted by the hypervisor accessing guest memory while
> the guest is in UC mode.
>
> However, wbinvd is a _very_ time consuming operation, so
> 1. during wbinvd the timer is quite likely to expire while irqs are
>    disabled, so its interrupt gets delayed until
> 2. vmentry back to the guest (irqs enabled again), when the timer
>    interrupt fires and drops the guest out at once;
> 3. back in the hypervisor ... then vmentry and wbinvd again;
>
> This loop repeats until, by luck, a wbinvd completes without the
> timer expiring and the loop breaks; typically it runs 10K~60K times,
> blocking the guest for 10s~60s.
>
> Reprogram the timer around the wbinvd to avoid this dead-like loop.
>
> Signed-off-by: Liu Jinsong <jinsong.liu@xxxxxxxxx>
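
(As a rough sanity check on those figures, assuming each loop iteration is
dominated by a single wbinvd of about 1 ms, which is what the numbers above
imply:

    10,000 iterations x ~1 ms  ~=  10 s
    60,000 iterations x ~1 ms  ~=  60 s

so the 10K~60K iterations line up with the quoted 10s~60s of guest blocking.)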
> ---
> xen/arch/x86/hvm/vmx/vmx.c | 32 ++++++++++++++++++++++++++++----
> 1 files changed, 28 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 75be62e..4768c9b 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -642,10 +642,6 @@ static void vmx_ctxt_switch_to(struct vcpu *v)
> __invept(INVEPT_SINGLE_CONTEXT, ept_get_eptp(ept_data), 0);
> }
>
> - /* For guest cr0.cd setting, do not use potentially polluted cache */
> - if ( unlikely(v->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
> - wbinvd();
> -
> vmx_restore_guest_msrs(v);
> vmx_restore_dr(v);
> }
> @@ -2967,6 +2963,27 @@ out:
> nvmx_idtv_handling();
> }
>
> +/*
> + * wbinvd is a _very_ time consuming operation, so
> + * 1. during wbinvd the timer is quite likely to expire while irqs are
> + *    disabled, so its interrupt gets delayed until
> + * 2. vmentry back to the guest (irqs enabled again), when the timer
> + *    interrupt fires and drops the guest out at once;
> + * 3. back in the hypervisor ... then vmentry and wbinvd again;
> + *
> + * This loop repeats until, by luck, a wbinvd completes without the
> + * timer expiring and the loop breaks; typically it runs 10K~60K
> + * times, blocking the guest for 10s~60s.
> + *
> + * Reprogram the timer around the wbinvd to avoid this dead-like loop.
> + */
> +static inline void uc_wbinvd_and_timer_adjust(void)
> +{
> + reprogram_timer(0);
> + wbinvd();
> + reprogram_timer(NOW() + MILLISECS(1));
Do you have any numbers for the time delta across the wbinvd()?
As it currently stands, I do not think it is reasonable to reprogram the
timer like this.
~Andrew
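
For reference, one way to collect such a number would be to bracket the
flush with NOW() timestamps somewhere in vmx.c, e.g. (an illustrative
sketch only, not part of the patch; the helper name and the printk are
assumptions):

    static void uc_wbinvd_timed(void)
    {
        s_time_t t0 = NOW();

        wbinvd();

        /* Report how long this flush took, in nanoseconds. */
        printk(XENLOG_DEBUG "wbinvd took %lu ns\n",
               (unsigned long)(NOW() - t0));
    }
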
> +}
> +
> void vmx_vmenter_helper(const struct cpu_user_regs *regs)
> {
> struct vcpu *curr = current;
> @@ -2974,6 +2991,13 @@ void vmx_vmenter_helper(const struct cpu_user_regs
> *regs)
> struct hvm_vcpu_asid *p_asid;
> bool_t need_flush;
>
> + /*
> + * The hypervisor may have accessed hvm guest memory and thus
> + * polluted cache lines while the guest is in UC mode.
> + */
> + if ( unlikely(curr->arch.hvm_vcpu.cache_mode == NO_FILL_CACHE_MODE) )
> + uc_wbinvd_and_timer_adjust();
> +
> if ( !cpu_has_vmx_vpid )
> goto out;
> if ( nestedhvm_vcpu_in_guestmode(curr) )
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel