Re: [Xen-devel] [PATCH v3] viridian: fix the HvFlushVirtualAddress/List hypercall implementation
>>> On 14.02.19 at 13:49, <paul.durrant@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3964,26 +3964,28 @@ static void hvm_s3_resume(struct domain *d)
>      }
>  }
>
> -static int hvmop_flush_tlb_all(void)
> +bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
> +                        void *ctxt)
>  {
> +    static DEFINE_PER_CPU(cpumask_t, flush_cpumask);
> +    cpumask_t *mask = &this_cpu(flush_cpumask);
>      struct domain *d = current->domain;
>      struct vcpu *v;
>
> -    if ( !is_hvm_domain(d) )
> -        return -EINVAL;
> -
>      /* Avoid deadlock if more than one vcpu tries this at the same time. */
>      if ( !spin_trylock(&d->hypercall_deadlock_mutex) )
> -        return -ERESTART;
> +        return false;
>
>      /* Pause all other vcpus. */
>      for_each_vcpu ( d, v )
> -        if ( v != current )
> +        if ( v != current && flush_vcpu(ctxt, v) )
>              vcpu_pause_nosync(v);
>
> +    cpumask_clear(mask);
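For reference, the point of the new flush_vcpu callback is to let the caller restrict the flush to the vCPUs the guest actually named in its request. A hypothetical caller might look like the sketch below (untested; need_flush and vcpu_mask are invented names here, the real viridian-side hook being in the non-quoted remainder of the patch):

    /* Sketch only: report whether a given vCPU is covered by the request. */
    static bool need_flush(void *ctxt, struct vcpu *v)
    {
        uint64_t vcpu_mask = *(uint64_t *)ctxt;

        return v->vcpu_id < 64 && (vcpu_mask & (1ull << v->vcpu_id));
    }

        /*
         * In the hypercall handler: a false return means the trylock
         * failed, so arrange for the guest to retry the hypercall.
         */
        if ( !hvm_flush_vcpu_tlb(need_flush, &vcpu_mask) )
            return HVM_HCALL_preempted;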
I'd prefer if the cpumask_clear() was pulled further down as well, in particular outside the locked region (see the sketch below). With this, which is easy enough to do while committing,
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
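To be concrete, an untested sketch of the movement I'm after, assuming the function goes on to drop the lock once all other vcpus are paused:

    /* All other vcpus are paused: safe to drop the lock now. */
    spin_unlock(&d->hypercall_deadlock_mutex);

    /*
     * The scratch mask is per-CPU and only consumed by the code further
     * down, so clearing it doesn't need the deadlock mutex held.
     */
    cpumask_clear(mask);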
Cc-ing Jürgen in the hope of his R-a-b.
Jan