
Re: [Xen-devel] [PATCH v9 1/3] ioreq-server: add support for multiple servers



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 22 May 2014 10:18
> To: Paul Durrant
> Cc: Ian Jackson; Stefano Stabellini; xen-devel@xxxxxxxxxxxxx
> Subject: Re: [PATCH v9 1/3] ioreq-server: add support for multiple servers
> 
> >>> On 20.05.14 at 18:08, <paul.durrant@xxxxxxxxxx> wrote:
> > +static struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> > +                                                        ioreq_t *p)
> > +{
> > +#define CF8_BDF(cf8)     (((cf8) & 0x00ffff00) >> 8)
> > +#define CF8_ADDR_LO(cf8) ((cf8) & 0x000000fc)
> > +#define CF8_ADDR_HI(cf8) (((cf8) & 0x0f000000) >> 16)
> > +#define CF8_ENABLED(cf8) (!!((cf8) & 0x80000000))
> > +
> > +    struct hvm_ioreq_server *s;
> > +    uint32_t cf8;
> > +    uint8_t type;
> > +    uint64_t addr;
> > +
> > +    if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
> > +        return NULL;
> > +
> > +    if ( list_is_singular(&d->arch.hvm_domain.ioreq_server.list) ||
> > +         (p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO) )
> > +        return d->arch.hvm_domain.default_ioreq_server;
> > +
> > +    cf8 = d->arch.hvm_domain.pci_cf8;
> > +
> > +    if ( p->type == IOREQ_TYPE_PIO &&
> > +         (p->addr & ~3) == 0xcfc &&
> > +         CF8_ENABLED(cf8) )
> > +    {
> > +        uint32_t sbdf;
> > +
> > +        /* PCI config data cycle */
> > +
> > +        sbdf = HVMOP_PCI_SBDF(0,
> > +                              PCI_BUS(CF8_BDF(cf8)),
> > +                              PCI_SLOT(CF8_BDF(cf8)),
> > +                              PCI_FUNC(CF8_BDF(cf8)));
> > +
> > +        type = IOREQ_TYPE_PCI_CONFIG;
> > +        addr = ((uint64_t)sbdf << 32) |
> > +               CF8_ADDR_HI(cf8) |
> > +               CF8_ADDR_LO(cf8) |
> > +               (p->addr & 3);
> > +    }
> > +    else
> > +    {
> > +        type = p->type;
> > +        addr = p->addr;
> > +    }
> > +
> > +    list_for_each_entry ( s,
> > +                          &d->arch.hvm_domain.ioreq_server.list,
> > +                          list_entry )
> > +    {
> > +        struct rangeset *r;
> > +
> > +        if ( s == d->arch.hvm_domain.default_ioreq_server )
> > +            continue;
> > +
> > +        BUILD_BUG_ON(IOREQ_TYPE_PIO != HVMOP_IO_RANGE_PORT);
> > +        BUILD_BUG_ON(IOREQ_TYPE_COPY != HVMOP_IO_RANGE_MEMORY);
> > +        BUILD_BUG_ON(IOREQ_TYPE_PCI_CONFIG != HVMOP_IO_RANGE_PCI);
> > +        r = s->range[type];
> > +
> > +        switch ( type )
> > +        {
> > +        case IOREQ_TYPE_PIO: {
> > +            unsigned long end = addr + p->size - 1;
> > +
> > +            if ( rangeset_contains_range(r, addr, end) )
> > +                return s;
> > +
> > +            break;
> > +        }
> > +        case IOREQ_TYPE_COPY: {
> > +            unsigned long end = addr + (p->size * p->count) - 1;
> > +
> > +            if ( rangeset_contains_range(r, addr, end) )
> > +                return s;
> > +
> > +            break;
> > +        }
> 
> I was about to say "coding style" again (due to the misplaced opening
> braces), but then I started wondering whether both "end" variables
> are warranted here in the first place. And if they are, I would think
> you might better declare just one instance in the scope of the switch(),
> avoiding the need for the braces.
> 

Ok. Will do.
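For reference, an untested sketch of what I have in mind (a single 'end'
declared in the switch() scope, so the per-case braces and declarations
go away; the remaining cases stay as they are):

    switch ( type )
    {
        unsigned long end; /* shared by the PIO and COPY cases */

    case IOREQ_TYPE_PIO:
        end = addr + p->size - 1;
        if ( rangeset_contains_range(r, addr, end) )
            return s;

        break;
    case IOREQ_TYPE_COPY:
        end = addr + (p->size * p->count) - 1;
        if ( rangeset_contains_range(r, addr, end) )
            return s;

        break;
    /* ... other cases unchanged ... */
    }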

> > +struct xen_hvm_get_ioreq_server_info {
> > +    domid_t domid;               /* IN - domain to be serviced */
> > +    ioservid_t id;               /* IN - server id */
> > +    evtchn_port_t bufioreq_port; /* OUT - buffered ioreq port */
> > +    xen_pfn_t ioreq_pfn;         /* OUT - sync ioreq pfn */
> > +    xen_pfn_t bufioreq_pfn;      /* OUT - buffered ioreq pfn */
> > +};
> 
> I'm sorry for not having paid attention to this earlier, but HVM ops
> should have all their interface structures laid out identically for
> 64- and 32-bit guests - see other uses of uint64_aligned_t in
> this header.
> 

Ok. I'll fix.
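Presumably along these lines (untested sketch: the pfn fields widened to
uint64_aligned_t so the structure layout is identical for 32- and 64-bit
guests; exact types subject to review):

    struct xen_hvm_get_ioreq_server_info {
        domid_t domid;                 /* IN - domain to be serviced */
        ioservid_t id;                 /* IN - server id */
        evtchn_port_t bufioreq_port;   /* OUT - buffered ioreq port */
        uint64_aligned_t ioreq_pfn;    /* OUT - sync ioreq pfn */
        uint64_aligned_t bufioreq_pfn; /* OUT - buffered ioreq pfn */
    };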

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

