
Re: [Xen-devel] [PATCH v8 15/17] vmx: VT-d posted-interrupt core logic handling



Oh, one more thing.

On Mon, 2015-10-12 at 16:55 +0800, Feng Wu wrote: 

> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index e448b31..cad70b4 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c

> +void vmx_vcpu_block_cancel(struct vcpu *v)
> +{
> +    unsigned long flags;
> +
> +    if ( !has_arch_pdevs(v->domain) )
> +        return;
> +
> +    spin_lock_irqsave(&v->arch.hvm_vmx.pi_lock, flags);
> +
> +    if ( !test_bit(_VPF_blocked, &v->pause_flags) )
> +    {
> +        struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
> +        unsigned int pi_block_cpu;
> +
> +        /* the vCPU is not on any blocking list. */
> +        pi_block_cpu = v->arch.hvm_vmx.pi_block_cpu;
> +        if ( pi_block_cpu == NR_CPUS )
> +            goto out;
> +
> +        /*
> +         * Set 'NV' field back to posted_intr_vector, so the
> +         * Posted-Interrupts can be delivered to the vCPU by
> +         * VT-d HW after it is scheduled to run.
> +         */
> +        write_atomic(&pi_desc->nv, posted_intr_vector);
> +
> +        spin_lock(&per_cpu(pi_blocked_vcpu_lock, pi_block_cpu));
> +
> +        /*
> +         * v->arch.hvm_vmx.pi_block_cpu == NR_CPUS here means the vCPU was
> +         * removed from the blocking list while we are acquiring the lock.
> +         */
> +        if ( v->arch.hvm_vmx.pi_block_cpu == NR_CPUS )
> +        {
> +            spin_unlock(&per_cpu(pi_blocked_vcpu_lock, pi_block_cpu));
> +            goto out;
> +        }
> +
> +        list_del(&v->arch.hvm_vmx.pi_blocked_vcpu_list);
> +        v->arch.hvm_vmx.pi_block_cpu = NR_CPUS;
> +        spin_unlock(&per_cpu(pi_blocked_vcpu_lock, pi_block_cpu));
> +    }
> +
> +out:
> +    spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_lock, flags);
> +}
> +

> +void vmx_vcpu_wake_prepare(struct vcpu *v)
> +{
>
This function looks exactly identical to vmx_vcpu_block_cancel(), with
the only exception of...

> +    unsigned long flags;
> +
> +    if ( !has_arch_pdevs(v->domain) )
> +        return;
> +
> +    spin_lock_irqsave(&v->arch.hvm_vmx.pi_lock, flags);
> +
> +    if ( !test_bit(_VPF_blocked, &v->pause_flags) )
> +    {
> +        struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
> +        unsigned int pi_block_cpu;
> +
> +        /* the vCPU is not on any blocking list. */
> +        pi_block_cpu = v->arch.hvm_vmx.pi_block_cpu;
> +        if ( pi_block_cpu == NR_CPUS )
> +            goto out;
> +
> +        /*
> +         * We cannot set 'SN' here since we don't change 'SN' during lazy
> +         * context switch, if we set 'SN' here, we may end up in the situation
> +         * that the vCPU is running with 'SN' set.
> +         */
> +
>
... this comment (which is present here and not there). Is that the
case, or am I overlooking something?

In any case, there seems to be a lot of common code between the two.

If we are to keep this design, I think the common code should be
factored out into a helper that both functions call.
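
Just to give an idea of what I mean, here is a rough and uncompiled
sketch (the helper name vmx_pi_list_remove() is mine, pick whatever you
prefer, and I am assuming the two functions really are identical apart
from the 'SN' comment):

static void vmx_pi_list_remove(struct vcpu *v)
{
    unsigned long flags;

    if ( !has_arch_pdevs(v->domain) )
        return;

    spin_lock_irqsave(&v->arch.hvm_vmx.pi_lock, flags);

    if ( !test_bit(_VPF_blocked, &v->pause_flags) )
    {
        struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
        unsigned int pi_block_cpu = v->arch.hvm_vmx.pi_block_cpu;

        /* The vCPU is not on any blocking list. */
        if ( pi_block_cpu == NR_CPUS )
            goto out;

        /*
         * Set 'NV' back to posted_intr_vector, so posted interrupts
         * are delivered by VT-d HW once the vCPU runs again.
         */
        write_atomic(&pi_desc->nv, posted_intr_vector);

        spin_lock(&per_cpu(pi_blocked_vcpu_lock, pi_block_cpu));

        /*
         * pi_block_cpu == NR_CPUS here means the vCPU was removed
         * from the blocking list while we were acquiring the lock.
         */
        if ( v->arch.hvm_vmx.pi_block_cpu == NR_CPUS )
        {
            spin_unlock(&per_cpu(pi_blocked_vcpu_lock, pi_block_cpu));
            goto out;
        }

        list_del(&v->arch.hvm_vmx.pi_blocked_vcpu_list);
        v->arch.hvm_vmx.pi_block_cpu = NR_CPUS;
        spin_unlock(&per_cpu(pi_blocked_vcpu_lock, pi_block_cpu));
    }

 out:
    spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_lock, flags);
}

void vmx_vcpu_block_cancel(struct vcpu *v)
{
    vmx_pi_list_remove(v);
}

void vmx_vcpu_wake_prepare(struct vcpu *v)
{
    /*
     * As per the comment in the patch, 'SN' must not be set here,
     * since 'SN' is not changed during lazy context switch.
     */
    vmx_pi_list_remove(v);
}

That would keep the locking and list manipulation in one place, and
make any difference between the two callers stand out.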

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

