Re: [Xen-devel] [PATCH v5 09/11] vpci/msi: add MSI handlers
>>> On 14.09.17 at 13:35, <roger.pau@xxxxxxxxxx> wrote:
> On Thu, Sep 14, 2017 at 04:50:10AM -0600, Jan Beulich wrote:
>> >>> On 14.09.17 at 12:42, <roger.pau@xxxxxxxxxx> wrote:
>> > On Thu, Sep 14, 2017 at 04:19:44AM -0600, Jan Beulich wrote:
>> >> >>> On 14.09.17 at 12:08, <roger.pau@xxxxxxxxxx> wrote:
>> >> > On Thu, Sep 07, 2017 at 09:29:41AM -0600, Jan Beulich wrote:
>> >> >> >>> On 14.08.17 at 16:28, <roger.pau@xxxxxxxxxx> wrote:
>> >> >> > +int vpci_msi_arch_enable(struct vpci_arch_msi *arch, struct pci_dev *pdev,
>> >> >> > +                         uint64_t address, uint32_t data, unsigned int vectors)
>> >> >> > +{
>> >> >> > +    struct msi_info msi_info = {
>> >> >> > +        .seg = pdev->seg,
>> >> >> > +        .bus = pdev->bus,
>> >> >> > +        .devfn = pdev->devfn,
>> >> >> > +        .entry_nr = vectors,
>> >> >> > +    };
>> >> >> > +    unsigned int i;
>> >> >> > +    int rc;
>> >> >> > +
>> >> >> > +    ASSERT(arch->pirq == INVALID_PIRQ);
>> >> >> > +
>> >> >> > +    /* Get a PIRQ. */
>> >> >> > +    rc = allocate_and_map_msi_pirq(pdev->domain, -1, &arch->pirq,
>> >> >> > +                                   MAP_PIRQ_TYPE_MULTI_MSI, &msi_info);
>> >> >> > +    if ( rc )
>> >> >> > +    {
>> >> >> > +        gdprintk(XENLOG_ERR, "%04x:%02x:%02x.%u: failed to map PIRQ: %d\n",
>> >> >> > +                 pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
>> >> >> > +                 PCI_FUNC(pdev->devfn), rc);
>> >> >> > +        return rc;
>> >> >> > +    }
>> >> >> > +
>> >> >> > +    for ( i = 0; i < vectors; i++ )
>> >> >> > +    {
>> >> >> > +        xen_domctl_bind_pt_irq_t bind = {
>> >> >> > +            .machine_irq = arch->pirq + i,
>> >> >> > +            .irq_type = PT_IRQ_TYPE_MSI,
>> >> >> > +            .u.msi.gvec = msi_vector(data) + i,
>> >> >>
>> >> >> Isn't that rather msi_vector(data + i), i.e. wouldn't you better
>> >> >> increment data together with i?
>> >> >
>> >> > That's true, because the vector is fetched from the last 8 bits of the
>> >> > data, but I find it more confusing (and it requires that the reader
>> >> > knows this detail). IMHO I would prefer to leave it as-is.
>> >>
>> >> No, the problem is the wrap-around case, which your code
>> >> doesn't handle. Iirc hardware behaves along the lines of what
>> >> I've suggested to change to, with potentially the vector
>> >> increment carrying into other parts of the value. Hence you
>> >> either need an early check for there not being any wrapping,
>> >> or other places may need similar adjustment (in which case it
>> >> might be better to really just increment "data" once in the
>> >> loop).
>> >
>> > Oh, so the vector increment carries over to the delivery mode, then I
>> > will switch it.
>>
>> But please double check first that I'm not mis-remembering.
>
> The PCI spec contains the following about the data field:
>
>     The Multiple Message Enable field (bits 6-4 of the Message Control
>     register) defines the number of low order message data bits the
>     function is permitted to modify to generate its system software
>     allocated vectors. For example, a Multiple Message Enable encoding of
>     “010” indicates the function has been allocated four vectors and is
>     permitted to modify message data bits 1 and 0 (a function modifies the
>     lower message data bits to generate the allocated number of vectors).
>     If the Multiple Message Enable field is “000”, the function is not
>     permitted to modify the message data.
>
> So it seems like the overflow should be limited to the number of
> enabled vectors,

Ah, right. So no spilling over into the higher bits.

> ie: maybe store the vector in a uint8_t and increase
> it at every loop?
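The semantics the spec excerpt above describes amount to incrementing the
vector under a mask derived from the number of enabled vectors, rather than
a plain msi_vector(data) + i. A minimal sketch of that masked increment (the
helper name msi_data_for_vector is hypothetical, not code from the patch or
the thread):

#include <stdint.h>

/*
 * Hypothetical helper, assuming "vectors" is a power of two as
 * multi-message MSI requires: only the low log2(vectors) bits of
 * "data" may change from one vector to the next; everything above
 * them (delivery mode etc.) must be preserved.
 */
static uint32_t msi_data_for_vector(uint32_t data, unsigned int vectors,
                                    unsigned int i)
{
    uint32_t mask = vectors - 1;    /* e.g. 4 vectors -> mask 0x3 */

    return (data & ~mask) | ((data + i) & mask);
}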
A uint8_t won't help: As the text says, you need to limit the
change to the number of bits that are permitted to be altered. I
think there's an implication for these bits to be all clear for the
first of the vectors, but that's not spelled out, so I'd prefer if
we were flexible and allowed to start from a non-zero value
(resulting in e.g. a 0b00111101, 0b00111110, 0b00111111, 0b00111100
vector sequence).

Jan
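As a quick check of the sequence given above (again a standalone sketch,
not code from the thread): with four vectors enabled, only the two low
data bits may vary, so starting from the non-zero value 0b00111101 the
fourth value wraps back to 0b00111100.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t data = 0x3d;          /* 0b00111101: non-zero low bits */
    unsigned int vectors = 4, i;   /* MME "010": bits 1 and 0 modifiable */
    uint32_t mask = vectors - 1;

    for ( i = 0; i < vectors; i++ )
        printf("0x%02x\n", (data & ~mask) | ((data + i) & mask));

    /* Prints 0x3d, 0x3e, 0x3f, 0x3c: the increment wraps within the
       two permitted bits and never carries into bit 2 or above. */
    return 0;
}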