Re: [Xen-devel] [PATCH v2 11/11] ioreq: provide support for long-running operations...
On Tue, Sep 10, 2019 at 04:14:18PM +0200, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monne <roger.pau@xxxxxxxxxx>
> > Sent: 03 September 2019 17:14
> > To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> > Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>;
> > Jan Beulich <jbeulich@xxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>;
> > Wei Liu <wl@xxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>;
> > Ian Jackson <Ian.Jackson@xxxxxxxxxx>; Julien Grall <julien.grall@xxxxxxx>;
> > Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>; Stefano Stabellini <sstabellini@xxxxxxxxxx>;
> > Tim (Xen.org) <tim@xxxxxxx>
> > Subject: [PATCH v2 11/11] ioreq: provide support for long-running operations...
> >
> > ...and switch vPCI to use this infrastructure for long-running
> > physmap modification operations.
> >
> > This allows getting rid of the vPCI-specific modifications done to
> > handle_hvm_io_completion and generalizes the support for long-running
> > operations to other internal ioreq servers. Such support is
> > implemented as a specific handler that can be registered by internal
> > ioreq servers and that will be called to check for pending work.
> > Returning true from this handler will prevent the vcpu from running
> > until the handler returns false.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> > ---
> >  xen/arch/x86/hvm/ioreq.c       | 55 +++++++++++++++++++++++++-----
> >  xen/drivers/vpci/header.c      | 61 ++++++++++++++++++----------------
> >  xen/drivers/vpci/vpci.c        |  8 ++++-
> >  xen/include/asm-x86/hvm/vcpu.h |  3 +-
> >  xen/include/xen/vpci.h         |  6 ----
> >  5 files changed, 89 insertions(+), 44 deletions(-)
> >
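As a minimal standalone sketch of the pending-work contract described
above (all names here are hypothetical illustrations, not Xen APIs):
the handler is re-polled each time the vcpu would resume, and
returning true keeps the vcpu held.

    #include <stdbool.h>
    #include <stdio.h>

    static unsigned int chunks_left = 3;     /* pretend physmap work */

    /* Returns true while work is pending, making progress on each poll. */
    static bool check_pending(void)
    {
        if ( chunks_left )
        {
            chunks_left--;                   /* handle one chunk */
            return true;                     /* vcpu must stay held */
        }
        return false;                        /* done, vcpu may run */
    }

    int main(void)
    {
        /* The scheduler would keep re-polling until the handler is done. */
        while ( check_pending() )
            printf("vcpu held, long-running operation in progress\n");
        printf("operation complete, vcpu resumes\n");
        return 0;
    }
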
> > diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> > index 33c56b880c..caa53dfa84 100644
> > --- a/xen/arch/x86/hvm/ioreq.c
> > +++ b/xen/arch/x86/hvm/ioreq.c
> > @@ -239,16 +239,48 @@ bool handle_hvm_io_completion(struct vcpu *v)
> >      enum hvm_io_completion io_completion;
> >      unsigned int id;
> >
> > -    if ( has_vpci(d) && vpci_process_pending(v) )
> > -    {
> > -        raise_softirq(SCHEDULE_SOFTIRQ);
> > -        return false;
> > -    }
> > -
> > -    FOR_EACH_EXTERNAL_IOREQ_SERVER(d, id, s)
> > +    FOR_EACH_IOREQ_SERVER(d, id, s)
> >      {
> >          struct hvm_ioreq_vcpu *sv;
> >
> > +        if ( hvm_ioreq_is_internal(id) )
> > +        {
>
> I wonder whether it would be neater to say:
>
>     if ( !hvm_ioreq_is_internal(id) )
>         continue;
>
> here to avoid the indentation below.
I'm not sure I follow. This loop does work for both internal and
external ioreq servers, and hence using continue to skip external
servers would prevent the iteration over ioreq_vcpu_list done below
for them. The resulting loop shape is sketched below.
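
For reference, this is roughly the shape of the loop after the hunk
below (an illustrative skeleton only, with the bodies elided):

    FOR_EACH_IOREQ_SERVER(d, id, s)
    {
        struct hvm_ioreq_vcpu *sv;

        if ( hvm_ioreq_is_internal(id) )
        {
            /* Internal servers: call s->handler() inline, maybe retry. */
            ...
            continue;    /* done with this server, move to the next */
        }

        /* External servers only: wait on the emulator via the list walk. */
        list_for_each_entry ( sv, &s->ioreq_vcpu_list, list_entry )
            ...
    }

An unconditional continue for external servers would skip that list
walk entirely.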
>
> > +            if ( vio->io_req.state == STATE_IOREQ_INPROCESS )
> > +            {
> > +                ioreq_t req = vio->io_req;
> > +
> > +                /*
> > +                 * Check and convert the PIO/MMIO ioreq to a PCI config
> > +                 * space access.
> > +                 */
> > +                convert_pci_ioreq(d, &req);
> > +
> > +                if ( s->handler(v, &req, s->data) == X86EMUL_RETRY )
> > +                {
> > +                    /*
> > +                     * Need to raise a scheduler irq in order to prevent
> > +                     * the guest vcpu from resuming execution.
> > +                     *
> > +                     * Note this is not required for external ioreq
> > +                     * operations because in that case the vcpu is marked
> > +                     * as blocked, but this cannot be done for
> > +                     * long-running internal operations, since it would
> > +                     * prevent the vcpu from being scheduled and thus the
> > +                     * long running operation from finishing.
> > +                     */
> > +                    raise_softirq(SCHEDULE_SOFTIRQ);
> > +                    return false;
> > +                }
> > +
> > +                /* Finished processing the ioreq. */
> > +                if ( hvm_ioreq_needs_completion(&vio->io_req) )
> > +                    vio->io_req.state = STATE_IORESP_READY;
> > +                else
> > +                    vio->io_req.state = STATE_IOREQ_NONE;
>
> IMO the above is also neater as:
>
>     vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
>                         STATE_IORESP_READY : STATE_IOREQ_NONE;
>
> > +            }
> > +            continue;
> > +        }
> > +
> >          list_for_each_entry ( sv,
> >                                &s->ioreq_vcpu_list,
> >                                list_entry )
Thanks, Roger.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel