Re: [PATCH v2 8/9] x86/HVM: drop stdvga's "lock" struct member
On 11.09.2024 14:42, Andrew Cooper wrote:
> On 11/09/2024 1:29 pm, Jan Beulich wrote:
>> No state is left to protect. It being the last field, drop the struct
>> itself as well. Similarly for then ending up empty, drop the .complete
>> handler.
>>
>> Suggested-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>
> Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx> with one change.

Thanks.

>> --- a/xen/arch/x86/hvm/stdvga.c
>> +++ b/xen/arch/x86/hvm/stdvga.c
>> @@ -69,8 +69,6 @@ static int cf_check stdvga_mem_write(
>>  static bool cf_check stdvga_mem_accept(
>>      const struct hvm_io_handler *handler, const ioreq_t *p)
>>  {
>> -    struct hvm_hw_stdvga *s = &current->domain->arch.hvm.stdvga;
>> -
>>      /*
>>       * The range check must be done without taking the lock, to avoid
>>       * deadlock when hvm_mmio_internal() is called from
>> @@ -80,50 +78,31 @@ static bool cf_check stdvga_mem_accept(
>>           (ioreq_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
>>          return 0;
>
> This wants adjusting too. At a minimum the comment about deadlock needs
> dropping, and a straight delete is fine.

Oh, of course. I meant to but then forgot.

> However for performance, we also want to do the dir/ptr/count exclusions
> before the address range exclusions, meaning that ...
>
>>
>> -    spin_lock(&s->lock);
>> -
>>      if ( p->dir != IOREQ_WRITE || p->data_is_ptr || p->count != 1 )
>>      {
>>          /*
>>           * Only accept single direct writes, as that's the only thing we can
>>           * accelerate using buffered ioreq handling.
>>           */
>
> ... it wants merging with this into a single expression.

I'm not convinced, and hence would at least want to keep this separate.
Which exact order checks want doing in would require more detailed
analysis imo. Or do you have blindingly obvious reasons to believe that
the re-ordering you suggest is always going to be an improvement?

Jan
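[Editorial illustration, not part of the patch or the thread: a minimal sketch of what the merging Andrew suggests could look like, assuming the post-series shape of stdvga_mem_accept() where only the range and dir/ptr/count exclusions remain. The ordering of the sub-conditions is exactly the point Jan questions above, so this shows only the mechanical merge, not a claim that it performs better.]

```c
/*
 * Hypothetical merged accept check: reject anything that is not a single
 * direct write within the legacy VGA MMIO window.  The dir/ptr/count
 * exclusions are shown first here purely to mirror Andrew's suggestion;
 * whether that order is an improvement is left open in the discussion.
 */
static bool cf_check stdvga_mem_accept(
    const struct hvm_io_handler *handler, const ioreq_t *p)
{
    /*
     * Only accept single direct writes to [VGA_MEM_BASE, VGA_MEM_BASE +
     * VGA_MEM_SIZE), as that's the only thing we can accelerate using
     * buffered ioreq handling.
     */
    if ( p->dir != IOREQ_WRITE || p->data_is_ptr || p->count != 1 ||
         (ioreq_mmio_first_byte(p) < VGA_MEM_BASE) ||
         (ioreq_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
        return false;

    return true;
}
```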