
Re: [Xen-devel] [PVH]: Help: msi.c



On Mon, 17 Dec 2012, Jan Beulich wrote:
> >>> On 17.12.12 at 13:42, Stefano Stabellini 
> >>> <stefano.stabellini@xxxxxxxxxxxxx>
> wrote:
> > On Fri, 14 Dec 2012, Mukesh Rathor wrote:
> >> On Thu, 13 Dec 2012 14:25:16 +0000
> >> Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx> wrote:
> >> 
> >> > On Thu, 13 Dec 2012, Jan Beulich wrote:
> >> > > >>> On 13.12.12 at 13:19, Stefano Stabellini
> >> > > >>> <stefano.stabellini@xxxxxxxxxxxxx>
> >> > > wrote:
> >> > > > On Thu, 13 Dec 2012, Jan Beulich wrote:
> >> > > >> >>> On 13.12.12 at 02:43, Mukesh Rathor
> >> > > >> >>> <mukesh.rathor@xxxxxxxxxx> wrote:
> >> > 
> >> > Actually I think that you might be right: just looking at the code it
> >> > seems that the mask bits get written to the table once as part of the
> >> > initialization process:
> >> > 
> >> > pci_enable_msix -> msix_capability_init -> msix_program_entries
> >> > 
> >> > Unfortunately msix_program_entries is called a few lines after
> >> > arch_setup_msi_irqs, where we call PHYSDEVOP_map_pirq to map the MSI
> >> > as a pirq.
> >> > However, after that is done, all the masking/unmasking is done via
> >> > irq_mask, which we handle properly by masking/unmasking the
> >> > corresponding event channels.
> >> > 
> >> > 
> >> > Possible solutions off the top of my head:
> >> > 
> >> > - in msix_program_entries, instead of writing to the table directly
> >> > (__msix_mask_irq), call desc->irq_data.chip->irq_mask(); see the
> >> > sketch below the list;
> >> > 
> >> > - replace msix_program_entries with arch_msix_program_entries, but
> >> > that would probably be unpopular.
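> >> > 
> >> > A very rough, untested sketch of the first option (the irq_data
> >> > lookup is just for illustration and the entry->masked bookkeeping is
> >> > left out, so this is not a finished patch); on Xen the irq_mask
> >> > callback would end up masking the event channel instead of touching
> >> > the table:
> >> > 
> >> >     list_for_each_entry(entry, &dev->msi_list, list) {
> >> >             struct irq_data *data = irq_get_irq_data(entry->irq);
> >> > 
> >> >             entries[i].vector = entry->irq;
> >> >             irq_set_msi_desc(entry->irq, entry);
> >> >             /* mask via the irq_chip rather than writing the vector
> >> >                control word in the MSI-X table directly */
> >> >             if (data && data->chip && data->chip->irq_mask)
> >> >                     data->chip->irq_mask(data);
> >> >             i++;
> >> >     }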
> >> 
> >> 
> >> Can you or Jan or somebody please take that over? I can focus on other
> >> PVH things then and try to get a patch in asap.
> > 
> > The following patch moves the MSI-X masking before arch_setup_msi_irqs,
> > which is when Linux issues PHYSDEVOP_map_pirq, the hypercall that
> > causes Xen to execute msix_capability_init.
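> > 
> > For reference, the map_pirq call on the Linux side looks roughly like
> > the sketch below (from memory of arch/x86/pci/xen.c; the exact field
> > values, and the extra table_base/entry_nr fields needed for the MSI-X
> > case, should be checked against the real code):
> > 
> >     struct physdev_map_pirq map_irq = {
> >             .domid = DOMID_SELF,
> >             .type  = MAP_PIRQ_TYPE_MSI,
> >             .index = -1,
> >             .pirq  = -1,
> >             .bus   = dev->bus->number,
> >             .devfn = dev->devfn,
> >     };
> > 
> >     ret = HYPERVISOR_physdev_op(PHYSDEVOP_map_pirq, &map_irq);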
> 
> And in what way would that help?

I was working under the assumption that before the call to
msix_capability_init (in particular before
rangeset_add_range(mmio_ro_ranges, dev->msix_table...) in Xen), the
table is actually writeable by the guest.

If that is the case, then this scheme should work.
If it is not the case, this patch is wrong.
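
Just to spell out the ordering the patch below is aiming for in the
Linux msix_capability_init (a sketch; the names of the surrounding
steps are from my reading of drivers/pci/msi.c and some intermediate
steps are omitted, so double-check against the actual file):

    msix_map_region(...)                /* ioremap the MSI-X table       */
    msix_setup_entries(...)             /* allocate the msi_desc list    */
    msix_mask_entries(dev, entries)     /* new: mask every entry while
                                           the table is still writeable  */
    arch_setup_msi_irqs(...)            /* PHYSDEVOP_map_pirq; Xen is
                                           expected to make the table
                                           read-only around here         */
    msix_program_entries(dev, entries)  /* record vectors, set msi descs */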



> > I don't have access to a machine with an MSI-X device right now, so I
> > have only tested the appended patch with MSI.
> > 
> > ---
> > 
> > 
> > 
> > diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
> > index a825d78..ef73e80 100644
> > --- a/drivers/pci/msi.c
> > +++ b/drivers/pci/msi.c
> > @@ -652,11 +652,22 @@ static void msix_program_entries(struct pci_dev *dev,
> >     int i = 0;
> >  
> >     list_for_each_entry(entry, &dev->msi_list, list) {
> > +           entries[i].vector = entry->irq;
> > +           irq_set_msi_desc(entry->irq, entry);
> > +           i++;
> > +   }
> > +}
> > +
> > +static void msix_mask_entries(struct pci_dev *dev,
> > +                                   struct msix_entry *entries)
> > +{
> > +   struct msi_desc *entry;
> > +   int i = 0;
> > +
> > +   list_for_each_entry(entry, &dev->msi_list, list) {
> >             int offset = entries[i].entry * PCI_MSIX_ENTRY_SIZE +
> >                                             PCI_MSIX_ENTRY_VECTOR_CTRL;
> >  
> > -           entries[i].vector = entry->irq;
> > -           irq_set_msi_desc(entry->irq, entry);
> >             entry->masked = readl(entry->mask_base + offset);
> >             msix_mask_irq(entry, 1);
> >             i++;
> > @@ -696,6 +707,8 @@ static int msix_capability_init(struct pci_dev *dev,
> >     if (ret)
> >             return ret;
> >  
> > +   msix_mask_entries(dev, entries);
> > +
> >     ret = arch_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSIX);
> >     if (ret)
> >             goto error;
> 
> 
> 



 

