
Re: [Xen-devel] [PATCH SpectreV1+L1TF v6 1/9] xen/evtchn: block speculative out-of-bound accesses



>>> On 08.02.19 at 14:44, <nmanthey@xxxxxxxxx> wrote:
> @@ -813,6 +817,13 @@ int set_global_virq_handler(struct domain *d, uint32_t virq)
>  
>      if (virq >= NR_VIRQS)
>          return -EINVAL;
> +
> +    /*
> +     * Make sure the guest controlled value virq is bounded even during
> +     * speculative execution.
> +     */
> +    virq = array_index_nospec(virq, ARRAY_SIZE(global_virq_handlers));
> +
>      if (!virq_is_global(virq))
>          return -EINVAL;

Didn't we agree earlier that this addition is pointless, since the only
caller is the XEN_DOMCTL_set_virq_handler handler, and most domctl-s
(including this one) are excluded from security considerations due to
XSA-77?
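
For reference, what array_index_nospec() adds is a branchless clamp of
the index against the array size, which - unlike the conditional bounds
check above it - cannot be bypassed by a mispredicted branch. Below is a
stand-alone sketch of the idiom; the mask formula matches the generic
fallback, but the harness around it is illustrative and not the actual
Xen code:

#include <stdio.h>

/*
 * ~0UL when index < size, 0UL otherwise - computed without a branch.
 * Like the generic fallback, this relies on arithmetic right shift of
 * negative values, as gcc/clang implement it.
 */
static unsigned long index_mask_nospec(unsigned long index,
                                       unsigned long size)
{
    return ~(long)(index | (size - 1UL - index)) >>
           (sizeof(long) * 8 - 1);
}

#define NR_VIRQS 24                     /* stand-in value */
static void *global_virq_handlers[NR_VIRQS];

int main(void)
{
    unsigned long virq = 100;           /* pretend: guest controlled */

    /* Equivalent of: virq = array_index_nospec(virq, NR_VIRQS); */
    virq &= index_mask_nospec(virq, NR_VIRQS);

    /*
     * Out-of-range inputs collapse to index 0, so the load below stays
     * in bounds even on a speculatively executed path.
     */
    printf("clamped virq = %lu, handler = %p\n", virq,
           global_virq_handlers[virq]);
    return 0;
}

That said, the question above stands as to whether the clamp is needed
on a domctl path at all.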

> @@ -955,22 +967,22 @@ long evtchn_bind_vcpu(unsigned int port, unsigned int vcpu_id)
>      {
>      case ECS_VIRQ:
>          if ( virq_is_global(chn->u.virq) )
> -            chn->notify_vcpu_id = vcpu_id;
> +            chn->notify_vcpu_id = v->vcpu_id;
>          else
>              rc = -EINVAL;
>          break;
>      case ECS_UNBOUND:
>      case ECS_INTERDOMAIN:
> -        chn->notify_vcpu_id = vcpu_id;
> +        chn->notify_vcpu_id = v->vcpu_id;
>          break;
>      case ECS_PIRQ:
> -        if ( chn->notify_vcpu_id == vcpu_id )
> +        if ( chn->notify_vcpu_id == v->vcpu_id )
>              break;
>          unlink_pirq_port(chn, d->vcpu[chn->notify_vcpu_id]);
> -        chn->notify_vcpu_id = vcpu_id;
> +        chn->notify_vcpu_id = v->vcpu_id;

Right now we understand why all of these changes are being done, but
without a comment recording the reason they are liable to be converted
back to the plain vcpu_id as an "optimization" down the road.
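
For instance, a comment along the following lines would record the
rationale; the wording is purely illustrative and assumes v was obtained
via the speculation-safe vcpu lookup added earlier in the patch:

    case ECS_UNBOUND:
    case ECS_INTERDOMAIN:
        /*
         * Deliberately use v->vcpu_id rather than the guest supplied
         * vcpu_id: v came from a speculation-safe lookup, and re-reading
         * the raw index here would re-open the very window this patch
         * is closing.
         */
        chn->notify_vcpu_id = v->vcpu_id;
        break;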

Everything else here looks fine to me now.

Jan


