Re: [PATCH V4 17/24] xen/ioreq: Introduce domain_has_ioreq_server()
On Tue, 12 Jan 2021, Oleksandr Tyshchenko wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>
> This patch introduces a helper whose main purpose is to check
> whether a domain is using IOREQ server(s).
>
> On Arm the current benefit is to avoid calling vcpu_ioreq_handle_completion()
> (which implies iterating over all possible IOREQ servers anyway)
> on every return in leave_hypervisor_to_guest() if there are no active
> servers for the particular domain.
> This helper will also be used by one of the subsequent patches on Arm.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
> CC: Julien Grall <julien.grall@xxxxxxx>
> [On Arm only]
> Tested-by: Wei Chen <Wei.Chen@xxxxxxx>

Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>

> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
>
> Changes RFC -> V1:
>    - new patch
>
> Changes V1 -> V2:
>    - update patch description
>    - guard helper with CONFIG_IOREQ_SERVER
>    - remove "hvm" prefix
>    - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
>    - put suitable ASSERT()s
>    - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in set_ioreq_server()
>    - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()
>
> Changes V2 -> V3:
>    - update patch description
>    - remove ASSERT()s from the helper, add a comment
>    - use #ifdef CONFIG_IOREQ_SERVER inside function body
>    - use new ASSERT() construction in set_ioreq_server()
>
> Changes V3 -> V4:
>    - update patch description
>    - drop per-domain variable "nr_servers"
>    - reimplement a helper to count the non-NULL entries
>    - make the helper out-of-line
> ---
>  xen/arch/arm/traps.c    | 15 +++++++++------
>  xen/common/ioreq.c      | 16 ++++++++++++++++
>  xen/include/xen/ioreq.h |  2 ++
>  3 files changed, 27 insertions(+), 6 deletions(-)
>
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 4a83e1e..35094d8 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2262,14 +2262,17 @@ static bool check_for_vcpu_work(void)
>      struct vcpu *v = current;
>  
>  #ifdef CONFIG_IOREQ_SERVER
> -    bool handled;
> +    if ( domain_has_ioreq_server(v->domain) )
> +    {
> +        bool handled;
>  
> -    local_irq_enable();
> -    handled = vcpu_ioreq_handle_completion(v);
> -    local_irq_disable();
> +        local_irq_enable();
> +        handled = vcpu_ioreq_handle_completion(v);
> +        local_irq_disable();
>  
> -    if ( !handled )
> -        return true;
> +        if ( !handled )
> +            return true;
> +    }
>  #endif
>  
>      if ( likely(!v->arch.need_flush_to_ram) )
> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
> index d5f4dd3..59f4990 100644
> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -80,6 +80,22 @@ static ioreq_t *get_ioreq(struct ioreq_server *s, struct vcpu *v)
>      return &p->vcpu_ioreq[v->vcpu_id];
>  }
>  
> +/*
> + * This should only be used when d == current->domain or when they're
> + * distinct and d is paused. Otherwise the result is stale before
> + * the caller can inspect it.
> + */
> +bool domain_has_ioreq_server(const struct domain *d)
> +{
> +    const struct ioreq_server *s;
> +    unsigned int id;
> +
> +    FOR_EACH_IOREQ_SERVER(d, id, s)
> +        return true;
> +
> +    return false;
> +}
> +
>  static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
>                                             struct ioreq_server **srvp)
>  {
> diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
> index ec7e98d..f0908af 100644
> --- a/xen/include/xen/ioreq.h
> +++ b/xen/include/xen/ioreq.h
> @@ -81,6 +81,8 @@ static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
>  #define HANDLE_BUFIOREQ(s) \
>      ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
>  
> +bool domain_has_ioreq_server(const struct domain *d);
> +
>  bool vcpu_ioreq_pending(struct vcpu *v);
>  bool vcpu_ioreq_handle_completion(struct vcpu *v);
>  bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
> -- 
> 2.7.4
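[Editor's note] The V4 helper works because FOR_EACH_IOREQ_SERVER skips empty (NULL) slots in the per-domain server table, so the loop body only runs for registered servers and the function can return true on the first hit; with no per-domain "nr_servers" counter (dropped in V4) there is nothing to keep in sync, at the cost of a short walk over a small fixed-size array. A minimal, self-contained sketch of that early-return pattern (hypothetical names and a plain array standing in for the Xen server table, not the actual macro):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define MAX_SERVERS 8

/* Stand-in for the per-domain IOREQ server table (hypothetical). */
struct fake_domain {
    void *servers[MAX_SERVERS];   /* NULL means "slot unused" */
};

/*
 * Return true as soon as one non-NULL entry is found, mirroring the way
 * domain_has_ioreq_server() returns from inside the iterator instead of
 * maintaining a separate counter.
 */
static bool fake_domain_has_server(const struct fake_domain *d)
{
    for ( size_t id = 0; id < MAX_SERVERS; id++ )
        if ( d->servers[id] != NULL )
            return true;

    return false;
}

int main(void)
{
    struct fake_domain d = { 0 };

    printf("before: %d\n", fake_domain_has_server(&d));  /* prints 0 */
    d.servers[3] = &d;                                    /* register one slot */
    printf("after:  %d\n", fake_domain_has_server(&d));  /* prints 1 */
    return 0;
}

In the traps.c hunk above, check_for_vcpu_work() pays that walk only once per exit path, which is still cheaper than unconditionally calling vcpu_ioreq_handle_completion() (which iterates over the servers anyway) when the domain has none.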