
Re: [Xen-devel] [PATCH] x86: properly handle MSI-X unmask operation from guests



>>> On 12.11.13 at 09:55, "Wu, Feng" <feng.wu@xxxxxxxxx> wrote:
>> -----Original Message-----
>> From: Xu, Dongxiao
>> Sent: Tuesday, November 12, 2013 12:16 AM
>> To: Jan Beulich; Wu, Feng
>> Cc: xen-devel; Zhang, Xiantao
>> Subject: RE: [Xen-devel] [PATCH] x86: properly handle MSI-X unmask operation
>> from guests
>> 
>> > -----Original Message-----
>> > From: xen-devel-bounces@xxxxxxxxxxxxx 
>> > [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Jan Beulich
>> > Sent: Monday, November 11, 2013 9:19 PM
>> > To: Wu, Feng
>> > Cc: xen-devel; Zhang, Xiantao
>> > Subject: Re: [Xen-devel] [PATCH] x86: properly handle MSI-X unmask
>> operation
>> > from guests
>> >
>> > >>> On 11.11.13 at 13:33, "Wu, Feng" <feng.wu@xxxxxxxxx> wrote:
>> > > --- a/tools/libxc/xc_domain.c
>> > > +++ b/tools/libxc/xc_domain.c
>> > > @@ -1342,6 +1342,8 @@ int xc_domain_update_msi_irq(
>> > >      uint32_t gvec,
>> > >      uint32_t pirq,
>> > >      uint32_t gflags,
>> > > +    uint32_t vector_ctrl,
>> > > +    int nr_entry,
>> > >      uint64_t gtable)
>> > >  {
>> >
>> > With there being out of tree consumers (like upstream qemu), you
>> > can't just add two parameters here.
> 
> So I will add the needed changes to upstream qemu, as I did for qemu-xen
> and qemu-traditional.

That wouldn't help - you'll be unable to modify what's already
released. Changes like this _must_ be done such that existing
consumers remain unaffected.

But anyway - the primary goal here ought to be to find a solution
not needing changes to the interfaces in the first place.
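To illustrate the compatibility point (not a suggestion to actually grow the
interface - per the above, the preferred outcome is no interface change at
all): if extra data really had to be passed, a binary compatible route would
be to leave the existing call untouched and add a separate one. Purely a
sketch - xc_domain_update_msi_irq_x() is a made-up name, and the leading
(xch, domid) parameters are assumed from the existing declaration:

    /* assumes <stdint.h> and the xc_interface type from <xenctrl.h> */

    /* Existing call keeps its signature, so already-released consumers
     * (e.g. upstream qemu) continue to build and behave as before. */
    int xc_domain_update_msi_irq(xc_interface *xch, uint32_t domid,
                                 uint32_t gvec, uint32_t pirq,
                                 uint32_t gflags, uint64_t gtable);

    /* Hypothetical extended variant; only new callers needing the extra
     * MSI-X information would use it. */
    int xc_domain_update_msi_irq_x(xc_interface *xch, uint32_t domid,
                                   uint32_t gvec, uint32_t pirq,
                                   uint32_t gflags, uint64_t gtable,
                                   uint32_t vector_ctrl, int nr_entry);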

>> > > +int msixtbl_unmask(struct vcpu *v, unsigned long table_base,
>> > > +                         unsigned int nr_entry)
>> > > +{
>> > > +    unsigned long ctrl_address;
>> > > +
>> > > +    ctrl_address = table_base + nr_entry * PCI_MSIX_ENTRY_SIZE +
>> > > +                   PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET;
>> > > +
>> > > +    if ( msixtbl_write(v, ctrl_address, 4, 0) != X86EMUL_OKAY )
>> >
>> > So once again you clear the mask bit with no consideration
>> > whatsoever as to the state Xen wants the mask bit to be in. Did
>> > you not read through the history of all these MSI-X related
>> > changes? Otherwise you should have known that it is precisely
>> > this which we must not allow.
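As an aside on the arithmetic in the hunk above: each MSI-X table entry is
16 bytes (PCI_MSIX_ENTRY_SIZE), the Vector Control dword sits at offset 12
within the entry (PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET), and bit 0 of that
dword is the per-entry mask bit. A worked example with hypothetical numbers,
just to make the computation concrete:

    unsigned long table_base = 0xfe000000UL; /* example MMIO base, made up */
    unsigned int nr_entry = 3;               /* fourth entry in the table */
    unsigned long ctrl_address = table_base
                                 + nr_entry * 16 /* PCI_MSIX_ENTRY_SIZE */
                                 + 12;           /* ..._VECTOR_CTRL_OFFSET */
    /* ctrl_address == 0xfe00003c; writing 0 there clears the mask bit */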
> 
> There is already some handling of this in msixtbl_write(), as follows:
> 
>     /*
>      * Do not allow guest to modify MSI-X control bit if it is masked
>      * by Xen. We'll only handle the case where Xen thinks that
>      * bit is unmasked, but hardware has silently masked the bit
>      * (in case of SR-IOV VF reset, etc). On the other hand, if Xen
>      * thinks that the bit is masked, but it's really not,
>      * we log a warning.
>      */
>     if ( msi_desc->msi_attrib.masked )
>     {
>         if ( !(orig & PCI_MSIX_VECTOR_BITMASK) )
>             printk(XENLOG_WARNING "MSI-X control bit is unmasked when"
>                    " it is expected to be masked [%04x:%02x:%02x.%u]\n",
>                    entry->pdev->seg, entry->pdev->bus,
>                    PCI_SLOT(entry->pdev->devfn),
>                    PCI_FUNC(entry->pdev->devfn));
> 
>         goto unlock;
>     }
> 
> So here the call to msixtbl_write() to unmask MSI-X is subject to the above
> check.
> I am not sure whether this is what you were concerned about. Please correct
> me if my understanding is not what you expected.

You quote but continue to appear to neglect the condition: The
unmasking is _not_ being done when the hypervisor wants the
interrupt masked.
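The kind of handling being asked for - consulting the state Xen wants the
mask bit to be in before acting on a guest unmask - might look like the
sketch below. This is illustration only, not the eventual fix;
guest_unmask_pending is a made-up field:

    /* Sketch: only clear the mask bit when Xen itself does not want the
     * entry masked; otherwise remember the guest's request instead of
     * forcing the write or silently dropping it. */
    if ( msi_desc->msi_attrib.masked )
        entry->guest_unmask_pending = 1;           /* hypothetical field */
    else
        rc = msixtbl_write(v, ctrl_address, 4, 0); /* clears the mask bit */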

> Jan, if you prefer handling this issue entirely inside the hypervisor, I
> will try to find a proper solution soon.
> In that case, we should not need to modify QEMU's code.

As said - yes, please, if at all possible.

Jan

