
Re: [Xen-devel] [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use HVM_PARAM_[BUF]IOREQ_PFN



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 08 October 2018 14:29
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Subject: Re: [PATCH 2/2] x86/hvm/ioreq: allow ioreq servers to use
> HVM_PARAM_[BUF]IOREQ_PFN
> 
> >>> On 05.10.18 at 15:43, <paul.durrant@xxxxxxxxxx> wrote:
> > Since commit 2c257bd6 "x86/hvm: remove default ioreq server (again)" the
> > GFNs allocated by the toolstack and set in HVM_PARAM_IOREQ_PFN and
> > HVM_PARAM_BUFIOREQ_PFN have been unused. This patch allows them to be
> > used by (non-default) ioreq servers.
> >
> > NOTE: This fixes a compatibility issue. A guest created on a version of
> >       Xen that pre-dates the initial ioreq server implementation and then
> >       migrated in will currently fail to resume because its migration
> >       stream will lack values for HVM_PARAM_IOREQ_SERVER_PFN and
> >       HVM_PARAM_NR_IOREQ_SERVER_PAGES *unless* the system has an
> >       emulator domain that uses direct resource mapping (which depends
> >       on the version of privcmd it happens to have) in which case it
> >       will not require use of GFNs for the ioreq server shared
> >       pages.
> 
> Meaning this wants to be backported till where?
> 

This fix is specific to 4.12 because it is predicated on the removal of 
default ioreq server support.

> > A similar compatibility issue with migrated-in VMs exists with Xen 4.11
> > because the upstream QEMU fall-back to use legacy ioreq server was
> > broken when direct resource mapping was introduced.
> > This is because, prior to the resource mapping patches, it was the
> > creation of the non-default ioreq server that failed if GFNs were not
> > available whereas, as of 4.11, it is retrieval of the info that fails
> > which does not trigger the fall-back.
> 
> Are you working on a fix or workaround for this, too, then?
> 

Not yet. I'm not sure how to approach this. There are a few options:

1. Backport the removal of the default IOREQ server along with this fix.
2. Do a bespoke 4.11 fix that forces IOREQ server creation to fail if there 
are no GFNs available, thus triggering the default IOREQ server fall-back 
in QEMU.
3. Upstream a fix into QEMU to fall back at the point that it fails to get 
GFNs, i.e. have it close its IOREQ server and then fall back to the default 
one.

The problem with option 1 is that it breaks qemu-trad. Option 2 is probably 
the simplest, but it is unnecessary if the emulator can do resource mapping; 
a rough sketch of what it might look like is below. Option 3 is probably the 
best, but it's not our fix to deliver.
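
For concreteness, option 2 could be little more than the following (a sketch 
only: the helper name is mine, and on the 4.11 branch the GFN pool lives 
under d->arch.hvm_domain rather than the d->arch.hvm naming used in the hunk 
quoted below):

static bool hvm_ioreq_gfn_available(const struct domain *d)
{
    /*
     * Any bit still set in the pool mask means a GFN can still be handed
     * out to an ioreq server.
     */
    return d->arch.hvm_domain.ioreq_gfn.mask != 0;
}

A 4.11-only change would call this early in the non-default path of ioreq 
server creation and fail with -ENOSPC when it returns false, so that QEMU's 
existing fall-back to the default ioreq server triggers at creation time 
rather than never.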

Thoughts?

 Paul


> > --- a/xen/arch/x86/hvm/ioreq.c
> > +++ b/xen/arch/x86/hvm/ioreq.c
> > @@ -237,6 +237,26 @@ bool handle_hvm_io_completion(struct vcpu *v)
> >      return true;
> >  }
> >
> > +static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
> > +{
> > +    struct domain *d = s->target;
> > +    unsigned int i;
> > +
> > +    BUILD_BUG_ON(HVM_PARAM_IOREQ_PFN >
> > +                 sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
> > +    BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN >
> > +                 sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
> > +    BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN != HVM_PARAM_IOREQ_PFN + 1);
> > +
> > +    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
> > +    {
> > +        if ( !test_and_set_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) )
> > +            return _gfn(d->arch.hvm.params[i]);
> 
> Aren't you risking to use GFN 0 here, if the param was never set?
> Since in theory GFN 0 could be valid here, perhaps whether these
> MFNs may be used should be tracked by starting out legacy_mask
> such that allocations are impossible, while the setting of the
> params would then make the slots available for allocating from?
> 
> Jan
> 
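
FWIW, that scheme would leave the allocation loop in the hunk above 
untouched, since a set bit there already means "not allocatable". Something 
along these lines, perhaps (sketch only; helper names and hook points are 
illustrative):

/*
 * At domain creation (e.g. from hvm_ioreq_init()): the legacy slots start
 * out unavailable, so hvm_alloc_legacy_ioreq_gfn() can never hand out a
 * GFN whose param was never set (avoiding the accidental GFN 0 case).
 */
static void hvm_ioreq_legacy_gfn_init(struct domain *d)
{
    set_bit(HVM_PARAM_IOREQ_PFN, &d->arch.hvm.ioreq_gfn.legacy_mask);
    set_bit(HVM_PARAM_BUFIOREQ_PFN, &d->arch.hvm.ioreq_gfn.legacy_mask);
}

/*
 * Called from the HVM param handling when HVM_PARAM_IOREQ_PFN or
 * HVM_PARAM_BUFIOREQ_PFN is set: only then does the slot become
 * allocatable.
 */
static void hvm_ioreq_legacy_gfn_enable(struct domain *d, unsigned int param)
{
    ASSERT(param == HVM_PARAM_IOREQ_PFN || param == HVM_PARAM_BUFIOREQ_PFN);

    clear_bit(param, &d->arch.hvm.ioreq_gfn.legacy_mask);
}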

