Re: [Xen-devel] [PATCH v5 03/11] x86/mmcfg: add handlers for the PVH Dom0 MMCFG areas
>>> On 14.08.17 at 16:28, <roger.pau@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/dom0_build.c
> +++ b/xen/arch/x86/hvm/dom0_build.c
> @@ -38,6 +38,8 @@
>  #include <public/hvm/hvm_info_table.h>
>  #include <public/hvm/hvm_vcpu.h>
>
> +#include "../x86_64/mmconfig.h"

Please try to avoid such includes - move whatever is needed to a header
in asm-x86/ instead.

> @@ -391,6 +391,187 @@ void register_vpci_portio_handler(struct domain *d)
>      handler->ops = &vpci_portio_ops;
>  }
>
> +struct hvm_mmcfg {
> +    struct list_head next;
> +    paddr_t addr;
> +    unsigned int size;
> +    uint16_t segment;
> +    int8_t bus;

uint8_t

> +};
> +
> +/* Handlers to trap PCI MMCFG config accesses. */
> +static const struct hvm_mmcfg *vpci_mmcfg_find(struct domain *d, paddr_t
> addr)

const struct domain *

> +static int vpci_mmcfg_accept(struct vcpu *v, unsigned long addr)
> +{
> +    struct domain *d = v->domain;

const?

> +int __hwdom_init register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
> +                                             unsigned int start_bus,
> +                                             unsigned int end_bus,
> +                                             unsigned int seg)
> +{
> +    struct hvm_mmcfg *mmcfg;
> +
> +    ASSERT(is_hardware_domain(d));
> +
> +    vpci_wlock(d);
> +    if ( vpci_mmcfg_find(d, addr) )
> +    {
> +        vpci_wunlock(d);
> +        return -EEXIST;
> +    }
> +
> +    mmcfg = xmalloc(struct hvm_mmcfg);
> +    if ( !mmcfg )
> +    {
> +        vpci_wunlock(d);
> +        return -ENOMEM;
> +    }
> +
> +    if ( list_empty(&d->arch.hvm_domain.mmcfg_regions) )
> +        register_mmio_handler(d, &vpci_mmcfg_ops);
> +
> +    mmcfg->addr = addr + (start_bus << 20);
> +    mmcfg->bus = start_bus;

I think your structure field would better be named start_bus too.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
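For readers unfamiliar with the layout being discussed: in PCI Express
ECAM (MMCFG), each bus decodes 1MiB of the window, which is why the
patch offsets the base by start_bus << 20; a full segment covering
buses 0-255 therefore spans 256MiB. The sketch below is purely
illustrative and not the patch under review - the names
hvm_mmcfg_sketch and mmcfg_decode are made up for this example. It
shows a structure with the review suggestions applied (uint8_t, a
start_bus field name) and how an address that falls inside an MMCFG
region decomposes into bus/device/function/register.

/*
 * Illustrative sketch only, under the assumptions stated above.
 * Applies the suggested uint8_t/start_bus changes and shows the ECAM
 * layout behind "addr + (start_bus << 20)": each bus owns 1MiB.
 */
#include <stdint.h>

struct hvm_mmcfg_sketch {
    uint64_t addr;          /* base address of the trapped region */
    unsigned int size;      /* (end_bus - start_bus + 1) << 20 */
    uint16_t segment;       /* PCI segment group */
    uint8_t start_bus;      /* first bus decoded by the region */
};

/* Split an address inside the region into bus/device/function/register. */
static void mmcfg_decode(const struct hvm_mmcfg_sketch *m, uint64_t addr,
                         unsigned int *bus, unsigned int *slot,
                         unsigned int *func, unsigned int *reg)
{
    uint64_t off = addr - m->addr;

    *bus  = m->start_bus + (off >> 20); /* 1MiB per bus      */
    *slot = (off >> 15) & 0x1f;         /* 32KiB per device  */
    *func = (off >> 12) & 0x7;          /* 4KiB per function */
    *reg  = off & 0xfff;                /* 4KiB config space */
}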