Re: [Xen-devel] [PATCH V3 09/10] xen/arm: io: Use binary search for mmio handler lookup
Hi Shanker,

On 27/06/16 21:33, Shanker Donthineni wrote:
> As the number of I/O handlers increases, the overhead associated with
> linear lookup also increases. The system might have a maximum of 144
> (assuming CONFIG_NR_CPUS=128) mmio handlers. In the worst case, it
> would require 144 iterations to find a matching handler. Now it is
> time for us to change from a linear search (complexity O(n)) to a
> binary search (complexity O(log n)) to reduce the mmio handler lookup
> overhead.
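For reference, a minimal self-contained sketch of the kind of range
binary search being proposed, not the actual patch. The struct below is
a simplified stand-in for Xen's struct mmio_handler, and the search
assumes the handlers array is sorted by base address with
non-overlapping regions:

/*
 * Sketch of an O(log n) lookup over handlers sorted by base address
 * with non-overlapping regions. Simplified types; illustration only.
 */
#include <stddef.h>
#include <stdint.h>

typedef uint64_t paddr_t;

struct mmio_handler {
    paddr_t addr;   /* base guest-physical address of the region */
    paddr_t size;   /* size of the region in bytes */
};

static const struct mmio_handler *
find_handler_bsearch(const struct mmio_handler *handlers, size_t n,
                     paddr_t gpa)
{
    size_t lo = 0, hi = n;    /* search the half-open range [lo, hi) */

    while ( lo < hi )
    {
        size_t mid = lo + (hi - lo) / 2;
        const struct mmio_handler *h = &handlers[mid];

        if ( gpa < h->addr )
            hi = mid;                      /* gpa is below this region */
        else if ( gpa >= h->addr + h->size )
            lo = mid + 1;                  /* gpa is above this region */
        else
            return h;                      /* gpa falls inside this region */
    }

    return NULL;    /* no handler covers gpa */
}

With 144 handlers this needs at most 8 comparisons instead of up to 144.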
However, you will add contention because the code is using a spinlock. I am
planning to send the following patch as a prerequisite of this series to
switch from a spinlock to a read-write lock:

commit b69e975ce25b2c94f7205b0b8329f351327fbcf7
Author: Julien Grall <julien.grall@xxxxxxx>
Date:   Tue Jun 28 11:04:11 2016 +0100

    xen/arm: io: Protect the handlers with a read-write lock

    Currently, accessing the I/O handlers does not require taking a lock
    because new handlers are always added at the end of the array. In a
    follow-up patch, this array will be sorted to optimize the lookup.
    Given that most of the time the I/O handlers will not be modified,
    using a spinlock will add contention when multiple vCPUs are
    accessing the emulated MMIOs.

    So use a read-write lock to protect the handlers.

    Finally, take the opportunity to re-indent domain_io_init correctly.

    Signed-off-by: Julien Grall <julien.grall@xxxxxxx>

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index 0156755..5a96836 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -70,23 +70,39 @@ static int handle_write(const struct mmio_handler *handler, struct vcpu *v,
                          handler->priv);
 }

-int handle_mmio(mmio_info_t *info)
+static const struct mmio_handler *find_mmio_handler(struct domain *d,
+                                                    paddr_t gpa)
 {
-    struct vcpu *v = current;
-    int i;
-    const struct mmio_handler *handler = NULL;
-    const struct vmmio *vmmio = &v->domain->arch.vmmio;
+    const struct mmio_handler *handler;
+    unsigned int i;
+    struct vmmio *vmmio = &d->arch.vmmio;
+
+    read_lock(&vmmio->lock);

     for ( i = 0; i < vmmio->num_entries; i++ )
     {
         handler = &vmmio->handlers[i];

-        if ( (info->gpa >= handler->addr) &&
-             (info->gpa < (handler->addr + handler->size)) )
+        if ( (gpa >= handler->addr) &&
+             (gpa < (handler->addr + handler->size)) )
             break;
     }

     if ( i == vmmio->num_entries )
+        handler = NULL;
+
+    read_unlock(&vmmio->lock);
+
+    return handler;
+}
+
+int handle_mmio(mmio_info_t *info)
+{
+    struct vcpu *v = current;
+    const struct mmio_handler *handler = NULL;
+
+    handler = find_mmio_handler(v->domain, info->gpa);
+    if ( !handler )
         return 0;

     if ( info->dabt.write )
@@ -104,7 +120,7 @@ void register_mmio_handler(struct domain *d,

     BUG_ON(vmmio->num_entries >= MAX_IO_HANDLER);

-    spin_lock(&vmmio->lock);
+    write_lock(&vmmio->lock);

     handler = &vmmio->handlers[vmmio->num_entries];

@@ -113,24 +129,17 @@ void register_mmio_handler(struct domain *d,
     handler->size = size;
     handler->priv = priv;

-    /*
-     * handle_mmio is not using the lock to avoid contention.
-     * Make sure the other processors see the new handler before
-     * updating the number of entries
-     */
-    dsb(ish);
-
     vmmio->num_entries++;

-    spin_unlock(&vmmio->lock);
+    write_unlock(&vmmio->lock);
 }

 int domain_io_init(struct domain *d)
 {
-   spin_lock_init(&d->arch.vmmio.lock);
-   d->arch.vmmio.num_entries = 0;
+    rwlock_init(&d->arch.vmmio.lock);
+    d->arch.vmmio.num_entries = 0;

-   return 0;
+    return 0;
 }

 /*
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index da1cc2e..32f10f2 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -20,6 +20,7 @@
 #define __ASM_ARM_MMIO_H__

 #include <xen/lib.h>
+#include <xen/rwlock.h>
 #include <asm/processor.h>
 #include <asm/regs.h>

@@ -51,7 +52,7 @@ struct mmio_handler {

 struct vmmio {
     int num_entries;
-    spinlock_t lock;
+    rwlock_t lock;
     struct mmio_handler handlers[MAX_IO_HANDLER];
 };

--
Julien Grall
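The commit message above says a follow-up patch will sort the array.
One possible shape for that, purely a hypothetical sketch reusing the
simplified struct from the earlier example, with the write lock,
overlap checks, and real Xen types omitted, is an ordered insert at
registration time, which keeps the lookup O(log n) while the rare
registration path pays O(n):

/*
 * Hypothetical sketch only: keep the array ordered by base address at
 * registration time so a binary-search lookup stays valid.
 */
static void register_handler_sorted(struct mmio_handler *handlers,
                                    unsigned int *num_entries,
                                    paddr_t addr, paddr_t size)
{
    unsigned int i = *num_entries;

    /* Shift entries with a higher base address up by one slot. */
    while ( i > 0 && handlers[i - 1].addr > addr )
    {
        handlers[i] = handlers[i - 1];
        i--;
    }

    handlers[i].addr = addr;
    handlers[i].size = size;

    (*num_entries)++;
}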