Re: [Xen-devel] [PATCH v5 2/7] VMX: Properly handle pi when all the assigned devices are removed
> From: Wu, Feng
> Sent: Tuesday, October 11, 2016 8:58 AM
>
> This patch handles some concern cases when the last assigned device
concern -> corner
> is removed from the domain. In this case we should carefully handle the
> PI descriptors and the per-cpu blocking lists, to make sure:
> - All the PI descriptors are in the right state when a device is
> assigned to the domain again later.
> - No vCPUs of the domain remain on the per-cpu blocking lists.
>
> Here we call vmx_pi_list_remove() to remove the vCPU from the blocking list
> if it is on the list. However, this could race with vmx_vcpu_block(), which
> might incorrectly add the vCPU back to the blocking list while the last
> device is being detached from the domain. Since this situation can only
> occur when detaching the last device, and that is not a frequent operation,
> we call domain_pause() beforehand, which is considered a clean and
> maintainable solution.
>
> Signed-off-by: Feng Wu <feng.wu@xxxxxxxxx>
> ---
> v5:
> - Remove a no-op wrapper
>
> xen/arch/x86/hvm/vmx/vmx.c | 28 ++++++++++++++++++++++++----
> 1 file changed, 24 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 623d5bc..d210516 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -163,14 +163,12 @@ static void vmx_pi_switch_to(struct vcpu *v)
> pi_clear_sn(pi_desc);
> }
>
> -static void vmx_pi_do_resume(struct vcpu *v)
> +static void vmx_pi_list_remove(struct vcpu *v)
vmx_pi_unblock_vcpu?
> {
> unsigned long flags;
> spinlock_t *pi_blocking_list_lock;
> struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
>
> - ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
> -
> /*
> * Set 'NV' field back to posted_intr_vector, so the
> * Posted-Interrupts can be delivered to the vCPU when
> @@ -178,12 +176,12 @@ static void vmx_pi_do_resume(struct vcpu *v)
> */
> write_atomic(&pi_desc->nv, posted_intr_vector);
>
> - /* The vCPU is not on any blocking list. */
> pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
>
> /* Prevent the compiler from eliminating the local variable.*/
> smp_rmb();
>
> + /* The vCPU is not on any blocking list. */
> if ( pi_blocking_list_lock == NULL )
> return;
>
> @@ -203,6 +201,13 @@ static void vmx_pi_do_resume(struct vcpu *v)
> spin_unlock_irqrestore(pi_blocking_list_lock, flags);
> }
>
> +static void vmx_pi_do_resume(struct vcpu *v)
> +{
> + ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
> +
> + vmx_pi_list_remove(v);
> +}
> +
> /* This function is called when pcidevs_lock is held */
> void vmx_pi_hooks_assign(struct domain *d)
> {
> @@ -220,14 +225,29 @@ void vmx_pi_hooks_assign(struct domain *d)
> /* This function is called when pcidevs_lock is held */
> void vmx_pi_hooks_deassign(struct domain *d)
> {
> + struct vcpu *v;
> +
> if ( !iommu_intpost || !has_hvm_container_domain(d) )
> return;
>
> ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
>
> + /*
> + * Pausing the domain can make sure the vCPU is not
> + * running and hence calling the hooks simultaneously
and hence 'not' calling
> + * when deassigning the PI hooks and removing the vCPU
> + * from the blocking list.
> + */
> + domain_pause(d);
> +
> d->arch.hvm_domain.vmx.vcpu_block = NULL;
> d->arch.hvm_domain.vmx.pi_do_resume = NULL;
> d->arch.hvm_domain.vmx.pi_switch_from = NULL;
> +
> + for_each_vcpu ( d, v )
> + vmx_pi_list_remove(v);
> +
> + domain_unpause(d);
> }
>
> static int vmx_domain_initialise(struct domain *d)
> --
> 2.1.0
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel