
Re: [Xen-devel] [PATCH] x86/vMSI-X: avoid missing first unmask of vectors



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 21 April 2016 12:44
> To: Paul Durrant
> Cc: Andrew Cooper; Wei Liu; xen-devel; Keir (Xen.org)
> Subject: RE: [PATCH] x86/vMSI-X: avoid missing first unmask of vectors
> 
> >>> On 21.04.16 at 13:33, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> Sent: 21 April 2016 10:38
> >> --- a/xen/arch/x86/hvm/vmsi.c
> >> +++ b/xen/arch/x86/hvm/vmsi.c
> >> @@ -341,7 +352,21 @@ static int msixtbl_range(struct vcpu *v,
> >>      desc = msixtbl_addr_to_desc(msixtbl_find_entry(v, addr), addr);
> >>      rcu_read_unlock(&msixtbl_rcu_lock);
> >>
> >> -    return !!desc;
> >> +    if ( desc )
> >> +        return 1;
> >> +
> >> +    if ( (addr & (PCI_MSIX_ENTRY_SIZE - 1)) ==
> >> +         PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET )
> >> +    {
> >> +        const ioreq_t *r = &v->arch.hvm_vcpu.hvm_io.io_req;
> >> +
> >
> > If you need to start digging into the ioreq here then I think it would be
> > better to have vmsi register a full hvm_io_ops rather than hvm_mmio_ops.
> > That way it gets the ioreq passed to the accept method without any need
> > to dig.
> 
> I can certainly try and see how this works out, but it's relatively
> clear that it'll make the patch bigger and backporting more difficult.
> So maybe this would be better left as a subsequent cleanup
> exercise?
>

True, it would make backporting trickier, so maybe a follow-up patch would be 
the better approach. In which case...

Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

> Jan
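
P.S. For reference, the sort of thing I had in mind is sketched below. This is
untested and purely illustrative (the accept function name is made up, and the
existing msixtbl_read/msixtbl_write callbacks would need their prototypes
adjusting to the hvm_io_ops ones); it is modelled on how stdvga registers
itself via hvm_next_io_handler() rather than register_mmio_handler():

/* Rough, untested sketch -- not a tested implementation. */
static bool_t msixtbl_mmio_accept(const struct hvm_io_handler *handler,
                                  const ioreq_t *p)
{
    struct vcpu *curr = current;
    const struct msi_desc *desc;

    if ( p->type != IOREQ_TYPE_COPY )
        return 0;

    rcu_read_lock(&msixtbl_rcu_lock);
    desc = msixtbl_addr_to_desc(msixtbl_find_entry(curr, p->addr), p->addr);
    rcu_read_unlock(&msixtbl_rcu_lock);

    if ( desc )
        return 1;

    /*
     * The first-unmask special case from the patch would go here; with the
     * ioreq passed in, it can look at p->dir and p->data directly instead
     * of fishing them out of v->arch.hvm_vcpu.hvm_io.io_req.
     */
    return 0;
}

static const struct hvm_io_ops msixtbl_mmio_ops = {
    .accept = msixtbl_mmio_accept,
    .read   = msixtbl_read,   /* prototype adjusted to match hvm_io_ops */
    .write  = msixtbl_write,  /* likewise */
};

/* ... and in msixtbl_init(), in place of register_mmio_handler(): */
struct hvm_io_handler *handler = hvm_next_io_handler(d);

if ( handler )
{
    handler->type = IOREQ_TYPE_COPY;
    handler->ops = &msixtbl_mmio_ops;
}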

