Re: [Xen-devel] [PATCH 07/10] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses
> -----Original Message-----
> From: Boris Ostrovsky [mailto:boris.ostrovsky@xxxxxxxxxx]
> Sent: 06 November 2016 21:43
> To: xen-devel@xxxxxxxxxxxxx
> Cc: jbeulich@xxxxxxxx; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>;
> Wei Liu <wei.liu2@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> Roger Pau Monne <roger.pau@xxxxxxxxxx>; Boris Ostrovsky
> <boris.ostrovsky@xxxxxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Subject: [PATCH 07/10] pvh/ioreq: Install handlers for ACPI-related PVH
> IO accesses
>
> No IOREQ server installed for an HVM guest (as indicated by
> HVM_PARAM_NR_IOREQ_SERVER_PAGES being set to zero) implies a PVH
> guest. These guests need to handle ACPI-related IO accesses.
>
> Logic for the handler will be provided by a later patch.
>
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> ---
> CC: Paul Durrant <paul.durrant@xxxxxxxxxx>
> ---
>  tools/libxc/xc_dom_x86.c        |  3 +++
>  xen/arch/x86/hvm/hvm.c          | 13 +++++++++----
>  xen/arch/x86/hvm/ioreq.c        | 17 +++++++++++++++++
>  xen/include/asm-x86/hvm/ioreq.h |  1 +
>  4 files changed, 30 insertions(+), 4 deletions(-)
>
> diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
> index 7fcdee1..0017694 100644
> --- a/tools/libxc/xc_dom_x86.c
> +++ b/tools/libxc/xc_dom_x86.c
> @@ -649,6 +649,9 @@ static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
>          /* Limited to one module. */
>          if ( dom->ramdisk_blob )
>              start_info_size += sizeof(struct hvm_modlist_entry);
> +
> +        /* No IOREQ server for PVH guests. */
> +        xc_hvm_param_set(xch, domid, HVM_PARAM_NR_IOREQ_SERVER_PAGES, 0);

I thought params defaulted to zero...

>      }
>      else
>      {
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 704fd64..6f8439d 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -5206,14 +5206,19 @@ static int hvmop_set_param(
>      {
>          unsigned int i;
>
> -        if ( a.value == 0 ||
> -             a.value > sizeof(d->arch.hvm_domain.ioreq_gmfn.mask) * 8 )

...and if they do then you should not need this change.

> +        if ( a.value > sizeof(d->arch.hvm_domain.ioreq_gmfn.mask) * 8 )
>          {
>              rc = -EINVAL;
>              break;
>          }
> -        for ( i = 0; i < a.value; i++ )
> -            set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
> +
> +        if ( a.value == 0 )    /* PVH guest */
> +            acpi_ioreq_init(d);
> +        else
> +        {
> +            for ( i = 0; i < a.value; i++ )
> +                set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
> +        }

This looks quite wrong. Initializing the ACPI I/O handler should be done
directly in hvm_domain_initialise() IMO, not as an obscure side effect of
setting a parameter to its default value.

  Paul
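For concreteness, a minimal sketch of the placement Paul suggests, assuming
acpi_ioreq_init() keeps the signature from this patch; the is_pvh_domain()
check is a hypothetical placeholder, since hvm_domain_initialise() would
need some way of identifying a PVH guest this early:

    /*
     * Sketch only: register the PVH ACPI handlers while the domain is
     * being created, next to the rest of the ioreq setup, instead of
     * from hvmop_set_param().  is_pvh_domain() stands in for whatever
     * predicate the series settles on.
     */
    int hvm_domain_initialise(struct domain *d)
    {
        /* ... existing initialisation ... */

        hvm_ioreq_init(d);

        if ( is_pvh_domain(d) )
            acpi_ioreq_init(d);

        /* ... */
        return 0;
    }

One wrinkle with this approach: HVM_PARAM_NR_IOREQ_SERVER_PAGES is only set
by the toolstack after the domain exists, which is presumably why the patch
keys off the parameter rather than acting at initialise time.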
>
>          break;
>      }
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index d2245e2..171ea82 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1389,6 +1389,23 @@ void hvm_ioreq_init(struct domain *d)
>      register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
>  }
>
> +static int acpi_ioaccess(
> +    int dir, unsigned int port, unsigned int bytes, uint32_t *val)
> +{
> +    return X86EMUL_OKAY;
> +}
> +
> +void acpi_ioreq_init(struct domain *d)
> +{
> +    /* Online CPU map, see DSDT's PRST region. */
> +    register_portio_handler(d, 0xaf00, HVM_MAX_VCPUS/8, acpi_ioaccess);
> +
> +    register_portio_handler(d, ACPI_GPE0_BLK_ADDRESS_V1,
> +                            ACPI_GPE0_BLK_LEN_V1, acpi_ioaccess);
> +    register_portio_handler(d, ACPI_PM1A_EVT_BLK_ADDRESS_V1,
> +                            ACPI_PM1A_EVT_BLK_LEN, acpi_ioaccess);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
> index fbf2c74..e7b7f52 100644
> --- a/xen/include/asm-x86/hvm/ioreq.h
> +++ b/xen/include/asm-x86/hvm/ioreq.h
> @@ -53,6 +53,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
>  unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool_t buffered);
>
>  void hvm_ioreq_init(struct domain *d);
> +void acpi_ioreq_init(struct domain *d);
>
>  #endif /* __ASM_X86_HVM_IOREQ_H__ */
>
> --
> 2.7.4
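The acpi_ioaccess() stub above intentionally does nothing yet; per the
commit message, the real logic arrives in a later patch. Purely as a
hypothetical illustration (not the handler that later patch actually adds),
a read of the 0xaf00 online-vCPU map consumed by the DSDT's PRST region
might look like:

    /*
     * Hypothetical sketch: reads of the 0xaf00 region return a bitmap
     * of online vCPUs, one bit per vCPU, and writes are ignored.  This
     * is illustrative only, not the implementation from the series.
     */
    static int acpi_ioaccess(
        int dir, unsigned int port, unsigned int bytes, uint32_t *val)
    {
        struct domain *d = current->domain;
        unsigned int i, first = (port - 0xaf00) * 8;

        if ( dir != IOREQ_READ )
            return X86EMUL_OKAY;

        *val = 0;
        for ( i = 0; i < bytes * 8; i++ )
        {
            unsigned int cpu = first + i;

            if ( cpu < d->max_vcpus && d->vcpu[cpu] != NULL &&
                 !test_bit(_VPF_down, &d->vcpu[cpu]->pause_flags) )
                *val |= 1u << i;
        }

        return X86EMUL_OKAY;
    }

The GPE0 and PM1a event blocks registered alongside it would need their own
status/enable semantics, which a stub like this does not attempt.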
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel