
Re: [Xen-devel] [PATCH v4 3/8] ioreq-server: create basic ioreq server abstraction.



> -----Original Message-----
> From: George Dunlap [mailto:george.dunlap@xxxxxxxxxxxxx]
> Sent: 03 April 2014 16:49
> To: Paul Durrant; George Dunlap
> Cc: xen-devel@xxxxxxxxxxxxx; Keir (Xen.org); Jan Beulich
> Subject: Re: [Xen-devel] [PATCH v4 3/8] ioreq-server: create basic ioreq
> server abstraction.
> 
> On 04/03/2014 04:43 PM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
> >> George Dunlap
> >> Sent: 03 April 2014 15:50
> >> To: Paul Durrant
> >> Cc: xen-devel@xxxxxxxxxxxxx; Keir (Xen.org); Jan Beulich
> >> Subject: Re: [Xen-devel] [PATCH v4 3/8] ioreq-server: create basic ioreq
> >> server abstraction.
> >>
> >> On Wed, Apr 2, 2014 at 4:11 PM, Paul Durrant <paul.durrant@xxxxxxxxxx>
> >> wrote:
> >>> Collect data structures concerning device emulation together into
> >>> a new struct hvm_ioreq_server.
> >>>
> >>> Code that deals with the shared and buffered ioreq pages is extracted
> >>> from functions such as hvm_domain_initialise, hvm_vcpu_initialise and
> >>> do_hvm_op and consolidated into a set of hvm_ioreq_server manipulation
> >>> functions. The lock in the hvm_ioreq_page served two different purposes
> >>> and has been replaced by separate locks in the hvm_ioreq_server.
> >>>
> >>> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> >>> Cc: Keir Fraser <keir@xxxxxxx>
> >>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> >>> ---
> >>>   xen/arch/x86/hvm/hvm.c           |  406 ++++++++++++++++++++++++++------------
> >>>   xen/include/asm-x86/hvm/domain.h |   35 +++-
> >>>   xen/include/asm-x86/hvm/vcpu.h   |   12 +-
> >>>   3 files changed, 322 insertions(+), 131 deletions(-)
> >>>
> >>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> >>> index 573f845..5f131c4 100644
> >>> --- a/xen/arch/x86/hvm/hvm.c
> >>> +++ b/xen/arch/x86/hvm/hvm.c
> >>> @@ -352,39 +352,49 @@ void hvm_migrate_pirqs(struct vcpu *v)
> >>>       spin_unlock(&d->event_lock);
> >>>   }
> >>>
> >>> -static ioreq_t *get_ioreq(struct vcpu *v)
> >>> +static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
> >>>   {
> >>> -    struct domain *d = v->domain;
> >>> -    shared_iopage_t *p = d->arch.hvm_domain.ioreq.va;
> >>> +    shared_iopage_t *p = s->ioreq.va;
> >>>
> >>> -    ASSERT((v == current) || spin_is_locked(&d->arch.hvm_domain.ioreq.lock));
> >>> +    /*
> >>> +     * Manipulation of the shared ioreq structure (to update the event
> >>> +     * channel) is protected by a domain_pause(). So this function should
> >>> +     * only ever be executed for the current vcpu or one that is paused.
> >>> +     */
> >> What on earth is "manipulation of the shared ioreq structure is
> >> protected by domain_pause()" supposed to mean?  Do you mean that the
> >> only time there may be a race is between something in the emulation
> >> code writing to it, and something in the resume path reading it?  That
> >> there are never any other races to access the structure?  And that
> >> since the resume path won't be taken when the domain is paused, there
> >> can be no races, and therefore no lock is needed?
> >>
> > The sentiment I'm trying to express is that the shared structure can
> > never be in use in the emulation path while it is being modified,
> > because the code that modifies it always pauses the domain first. So the
> > assertion is that either v == current (in which case the domain is
> > clearly not paused and we're in the emulation path) or !vcpu_runnable(v)
> > (in which case the domain is paused and we're making a change).
> 
> Sure, but is there a risk of two different invocations of the "code that
> modifies" happening at the same time?  (Perhaps, for instance, because
> of a buggy toolstack that makes two calls on the same ioreq server?)
> 

No, hvm_ioreq_server->lock prevents that. (See the comment in the structure definition.)
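
For reference, here is a rough sketch of the discipline being discussed.
It is illustrative only: the !vcpu_runnable() test and the
hvm_ioreq_server_set_evtchn() helper are my own names for the pattern,
not necessarily what the next version of the patch will use.

  static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
  {
      shared_iopage_t *p = s->ioreq.va;

      /*
       * Either v is the current vcpu, running the emulation path, or v
       * has been paused (via domain_pause()) by the code that is about
       * to rewrite the shared structure, so no lock is needed here.
       */
      ASSERT((v == current) || !vcpu_runnable(v));
      ASSERT(p != NULL);

      return &p->vcpu_ioreq[v->vcpu_id];
  }

  /*
   * Toolstack-side pattern: concurrent invocations are serialised by the
   * per-server lock, and the domain is paused so the emulation path
   * cannot be using the shared page while it is rewritten.
   */
  static void hvm_ioreq_server_set_evtchn(struct hvm_ioreq_server *s,
                                          struct vcpu *v,
                                          evtchn_port_t port)
  {
      domain_pause(v->domain);
      spin_lock(&s->lock);

      get_ioreq(s, v)->vp_eport = port;

      spin_unlock(&s->lock);
      domain_unpause(v->domain);
  }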

  Paul

>   -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
