Re: [Xen-devel] [PATCH v4 3/4] nested vmx: optimize for bulk access of virtual VMCS
>>> On 22.01.13 at 13:00, Dongxiao Xu <dongxiao.xu@xxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -30,6 +30,7 @@
>
> static void nvmx_purge_vvmcs(struct vcpu *v);
>
> +#define VMCS_BUF_SIZE 500
The biggest batch I can spot is about 60 elements large, so
why 500?
> @@ -83,6 +90,9 @@ void nvmx_vcpu_destroy(struct vcpu *v)
> list_del(&item->node);
> xfree(item);
> }
> +
> + if ( nvcpu->vvmcx_buf )
> + xfree(nvcpu->vvmcx_buf);
No need for the if() - xfree() copes quite well with NULL pointers.
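
For illustration only (a sketch reusing the names from the patch), the
cleanup could then simply read:

    xfree(nvcpu->vvmcx_buf);
    nvcpu->vvmcx_buf = NULL;

since xfree() already treats a NULL argument as a no-op.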
> @@ -830,6 +840,35 @@ static void vvmcs_to_shadow(void *vvmcs, unsigned int field)
> __vmwrite(field, value);
> }
>
> +static void vvmcs_to_shadow_bulk(struct vcpu *v, unsigned int n,
> + const u16 *field)
> +{
> + struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
> + void *vvmcs = nvcpu->nv_vvmcx;
> + u64 *value = nvcpu->vvmcx_buf;
> + unsigned int i;
> +
> + if ( !cpu_has_vmx_vmcs_shadowing )
> + goto fallback;
> +
> + if ( !value || n > VMCS_BUF_SIZE )
And then, if you lower that value, be verbose (at least in debugging
builds) about the buffer size being exceeded.
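
Something along these lines would do (a sketch only; the lowered size and
the message text are made up here):

    #define VMCS_BUF_SIZE 100

    if ( !value || n > VMCS_BUF_SIZE )
    {
        gdprintk(XENLOG_WARNING,
                 "%u vVMCS fields exceed the %u-entry bounce buffer\n",
                 n, VMCS_BUF_SIZE);
        goto fallback;
    }

IIRC gdprintk() is compiled out in non-debug builds, which matches the
"at least in debugging builds" above.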
> --- a/xen/include/asm-x86/hvm/vcpu.h
> +++ b/xen/include/asm-x86/hvm/vcpu.h
> @@ -100,6 +100,8 @@ struct nestedvcpu {
> */
> bool_t nv_ioport80;
> bool_t nv_ioportED;
> +
> + u64 *vvmcx_buf; /* A temp buffer for data exchange */
VMX-specific field in non-VMX structure? And wouldn't the buffer
anyway more efficiently be per-pCPU instead of per-vCPU?
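
A per-pCPU bounce buffer would address both points; roughly (a sketch only,
leaving out the allocation, e.g. from a CPU_UP_PREPARE notifier, and the
freeing when a CPU goes down):

    /* in vvmx.c, replacing the struct nestedvcpu field */
    static DEFINE_PER_CPU(u64 *, vvmcx_buf);

and the bulk helpers would then pick it up via

    u64 *value = this_cpu(vvmcx_buf);

which also keeps the VMX-specific pointer out of the common structure.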
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel