
Re: [Xen-devel] [PATCH 07/10] pvh/ioreq: Install handlers for ACPI-related PVH IO accesses



> -----Original Message-----
> From: Boris Ostrovsky [mailto:boris.ostrovsky@xxxxxxxxxx]
> Sent: 07 November 2016 14:01
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxx
> Cc: jbeulich@xxxxxxxx; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>;
> Wei Liu <wei.liu2@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>; Roger
> Pau Monne <roger.pau@xxxxxxxxxx>
> Subject: Re: [PATCH 07/10] pvh/ioreq: Install handlers for ACPI-related PVH
> IO accesses
> 
> On 11/07/2016 04:39 AM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Boris Ostrovsky [mailto:boris.ostrovsky@xxxxxxxxxx]
> >> Sent: 06 November 2016 21:43
> >> To: xen-devel@xxxxxxxxxxxxx
> >> Cc: jbeulich@xxxxxxxx; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>;
> >> Wei Liu <wei.liu2@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>;
> >> Roger
> >> Pau Monne <roger.pau@xxxxxxxxxx>; Boris Ostrovsky
> >> <boris.ostrovsky@xxxxxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> >> Subject: [PATCH 07/10] pvh/ioreq: Install handlers for ACPI-related PVH
> >> IO
> >> accesses
> >>
> >> No IOREQ server installed for an HVM guest (as indicated
> >> by HVM_PARAM_NR_IOREQ_SERVER_PAGES being set to zero) implies
> >> a PVH guest. These guests need to handle ACPI-related IO
> >> accesses.
> >>
> >> Logic for the handler will be provided by a later patch.
> >>
> >> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> >> ---
> >> CC: Paul Durrant <paul.durrant@xxxxxxxxxx>
> >> ---
> >>  tools/libxc/xc_dom_x86.c        |  3 +++
> >>  xen/arch/x86/hvm/hvm.c          | 13 +++++++++----
> >>  xen/arch/x86/hvm/ioreq.c        | 17 +++++++++++++++++
> >>  xen/include/asm-x86/hvm/ioreq.h |  1 +
> >>  4 files changed, 30 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
> >> index 7fcdee1..0017694 100644
> >> --- a/tools/libxc/xc_dom_x86.c
> >> +++ b/tools/libxc/xc_dom_x86.c
> >> @@ -649,6 +649,9 @@ static int alloc_magic_pages_hvm(struct
> >> xc_dom_image *dom)
> >>          /* Limited to one module. */
> >>          if ( dom->ramdisk_blob )
> >>              start_info_size += sizeof(struct hvm_modlist_entry);
> >> +
> >> +        /* No IOREQ server for PVH guests. */
> >> +        xc_hvm_param_set(xch, domid,
> >> HVM_PARAM_NR_IOREQ_SERVER_PAGES, 0);
> > I thought params defaulted to zero...
> >
> >>      }
> >>      else
> >>      {
> >> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> >> index 704fd64..6f8439d 100644
> >> --- a/xen/arch/x86/hvm/hvm.c
> >> +++ b/xen/arch/x86/hvm/hvm.c
> >> @@ -5206,14 +5206,19 @@ static int hvmop_set_param(
> >>      {
> >>          unsigned int i;
> >>
> >> -        if ( a.value == 0 ||
> >> -             a.value > sizeof(d->arch.hvm_domain.ioreq_gmfn.mask) * 8 )
> > ...and if they do then you should not need this change.
> >
> >> +        if ( a.value > sizeof(d->arch.hvm_domain.ioreq_gmfn.mask) * 8 )
> >>          {
> >>              rc = -EINVAL;
> >>              break;
> >>          }
> >> -        for ( i = 0; i < a.value; i++ )
> >> -            set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
> >> +
> >> +        if ( a.value == 0 ) /* PVH guest */
> >> +            acpi_ioreq_init(d);
> >> +        else
> >> +        {
> >> +            for ( i = 0; i < a.value; i++ )
> >> +                set_bit(i, &d->arch.hvm_domain.ioreq_gmfn.mask);
> >> +        }
> > This looks quite wrong. Initializing the acpi io handler should be done
> > directly in hvm_domain_initialise() IMO and not as an obscure side effect
> > of setting a parameter to its default value.
> 
> Right, this call indeed is used to tell the hypervisor that it should
> handle IO accesses itself. It was either this or adding another
> emulation flag (which is what Andrew prefers).

Yes, another emulation flag seems like the right thing.

  Paul

> 
> -boris
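
For reference, a minimal sketch of the emulation-flag approach discussed
above. The flag name XEN_X86_EMU_ACPI_IO, its bit value and the
has_acpi_io() helper are hypothetical illustrations, modelled on the
existing XEN_X86_EMU_* flags and has_vlapic()-style macros;
acpi_ioreq_init() is the handler-registration helper added by this series.
The point is that the ACPI IO handlers would be registered in
hvm_domain_initialise(), driven by an explicit flag from the toolstack,
rather than as a side effect of HVM_PARAM_NR_IOREQ_SERVER_PAGES being set
to zero.

/* Hypothetical sketch, not the posted patch. */

/* public/arch-x86/xen.h: hypothetical new emulation flag */
#define XEN_X86_EMU_ACPI_IO   (1U << 12)

/* asm-x86/domain.h style helper, mirroring has_vlapic() and friends */
#define has_acpi_io(d) \
    (!!((d)->arch.emulation_flags & XEN_X86_EMU_ACPI_IO))

/* xen/arch/x86/hvm/hvm.c */
int hvm_domain_initialise(struct domain *d)
{
    /* ... existing initialisation ... */

    /*
     * Register Xen's own ACPI IO port handlers for guests (e.g. PVH)
     * whose toolstack asked for in-hypervisor ACPI emulation.
     */
    if ( has_acpi_io(d) )
        acpi_ioreq_init(d);

    /* ... remainder of initialisation ... */
    return 0;
}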

