
Re: [Xen-devel] [PATCH v5 09/11] vpci/msi: add MSI handlers



On Thu, Sep 14, 2017 at 04:50:10AM -0600, Jan Beulich wrote:
> >>> On 14.09.17 at 12:42, <roger.pau@xxxxxxxxxx> wrote:
> > On Thu, Sep 14, 2017 at 04:19:44AM -0600, Jan Beulich wrote:
> >> >>> On 14.09.17 at 12:08, <roger.pau@xxxxxxxxxx> wrote:
> >> > On Thu, Sep 07, 2017 at 09:29:41AM -0600, Jan Beulich wrote:
> >> >> >>> On 14.08.17 at 16:28, <roger.pau@xxxxxxxxxx> wrote:
> >> >> > +int vpci_msi_arch_enable(struct vpci_arch_msi *arch, struct pci_dev *pdev,
> >> >> > +                         uint64_t address, uint32_t data, unsigned int vectors)
> >> >> > +{
> >> >> > +    struct msi_info msi_info = {
> >> >> > +        .seg = pdev->seg,
> >> >> > +        .bus = pdev->bus,
> >> >> > +        .devfn = pdev->devfn,
> >> >> > +        .entry_nr = vectors,
> >> >> > +    };
> >> >> > +    unsigned int i;
> >> >> > +    int rc;
> >> >> > +
> >> >> > +    ASSERT(arch->pirq == INVALID_PIRQ);
> >> >> > +
> >> >> > +    /* Get a PIRQ. */
> >> >> > +    rc = allocate_and_map_msi_pirq(pdev->domain, -1, &arch->pirq,
> >> >> > +                                   MAP_PIRQ_TYPE_MULTI_MSI, &msi_info);
> >> >> > +    if ( rc )
> >> >> > +    {
> >> >> > +        gdprintk(XENLOG_ERR, "%04x:%02x:%02x.%u: failed to map PIRQ: %d\n",
> >> >> > +                 pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
> >> >> > +                 PCI_FUNC(pdev->devfn), rc);
> >> >> > +        return rc;
> >> >> > +    }
> >> >> > +
> >> >> > +    for ( i = 0; i < vectors; i++ )
> >> >> > +    {
> >> >> > +        xen_domctl_bind_pt_irq_t bind = {
> >> >> > +            .machine_irq = arch->pirq + i,
> >> >> > +            .irq_type = PT_IRQ_TYPE_MSI,
> >> >> > +            .u.msi.gvec = msi_vector(data) + i,
> >> >> 
> >> >> Isn't that rather msi_vector(data + i), i.e. wouldn't you better
> >> >> increment data together with i?
> >> > 
> >> > That's true, because the vector is fetched from the last 8bits of the
> >> > data, but I find it more confusing (and it requires that the reader
> >> > knows this detail). IMHO I would prefer to leave it as-is.
> >> 
> >> No, the problem is the wrap-around case, which your code
> >> doesn't handle. Iirc hardware behaves along the lines of what
> >> I've suggested to change to, with potentially the vector
> >> increment carrying into other parts of the value. Hence you
> >> either need an early check for there not being any wrapping,
> >> or other places may need similar adjustment (in which case it
> >> might be better to really just increment "data" once in the
> >> loop).
> > 
> > Oh, so the vector increment carries over to the delivery mode, then I
> > will switch it.
> 
> But please double check first that I'm not mis-remembering.

The PCI spec contains the following about the data field:

The Multiple Message Enable field (bits 6-4 of the Message Control
register) defines the number of low order message data bits the
function is permitted to modify to generate its system software
allocated vectors. For example, a Multiple Message Enable encoding of
“010” indicates the function has been allocated four vectors and is
permitted to modify message data bits 1 and 0 (a function modifies the
lower message data bits to generate the allocated number of vectors).
If the Multiple Message Enable field is “000”, the function is not
permitted to modify the message data.

So it seems like any overflow should stay confined to the low-order
bits covered by the number of enabled vectors, i.e. maybe store the
vector in a uint8_t and increment it on each loop iteration?
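
Something along these lines maybe (rough, untested sketch against the
loop quoted above, with msi_vector() taken to return the low 8 bits of
the data field, as it does now):

    uint8_t vector = msi_vector(data);

    for ( i = 0; i < vectors; i++, vector++ )
    {
        xen_domctl_bind_pt_irq_t bind = {
            .machine_irq = arch->pirq + i,
            .irq_type = PT_IRQ_TYPE_MSI,
            /* A wrap stays confined to the 8 bit vector field and
             * never carries into the rest of the data value. */
            .u.msi.gvec = vector,
            ...
        };
        ...
    }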

I can't seem to find any specific mention of what happens when the
vector part of the data register overflows. From the excerpt above it
seems like the hardware should never modify any bits other than the
low 8-bit vector ones.

The Intel SDM mentions that the vector must be in the range 0x10-0xfe,
so in the overflow case the resulting vector (whether it wraps within
the low 8 bits or carries out of them) is not going to be valid anyway.
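
If we instead go for the early check you mentioned, something like
this at the top of vpci_msi_arch_enable() might be enough (again just
a sketch, reusing the names from the quoted patch):

    /* Refuse a vector block that would wrap past the low 8 bits of
     * the data field; the 0x10-0xfe range check could be folded in
     * here too. */
    if ( msi_vector(data) + vectors - 1 > 0xff )
        return -EINVAL;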

Thanks, Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
