Re: [Xen-devel] [PATCH v7 08/12] x86/hvm/ioreq: maintain an array of ioreq servers rather than a list
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 25 September 2017 16:17
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH v7 08/12] x86/hvm/ioreq: maintain an array of ioreq
> servers rather than a list
>
> >>> On 18.09.17 at 17:31, <paul.durrant@xxxxxxxxxx> wrote:
> > --- a/xen/arch/x86/hvm/ioreq.c
> > +++ b/xen/arch/x86/hvm/ioreq.c
> > @@ -33,6 +33,32 @@
> >
> >  #include <public/hvm/ioreq.h>
> >
> > +static void set_ioreq_server(struct domain *d, unsigned int id,
> > +                             struct hvm_ioreq_server *s)
> > +{
> > +    ASSERT(id < MAX_NR_IOREQ_SERVERS);
> > +    ASSERT(!s || !d->arch.hvm_domain.ioreq_server.server[id]);
> > +
> > +    d->arch.hvm_domain.ioreq_server.server[id] = s;
> > +}
> > +
> > +static struct hvm_ioreq_server *get_ioreq_server(struct domain *d,
>
> const (for the parameter)?

Ok.

> > +                                                 unsigned int id)
> > +{
> > +    if ( id >= MAX_NR_IOREQ_SERVERS )
> > +        return NULL;
> > +
> > +    return d->arch.hvm_domain.ioreq_server.server[id];
> > +}
> > +
> > +#define IS_DEFAULT(s) \
> > +    ((s) == get_ioreq_server((s)->domain, DEFAULT_IOSERVID))
>
> Is it really useful to go through get_ioreq_server() here?

No, I'll change to a direct array dereference.

> > +#define FOR_EACH_IOREQ_SERVER(d, id, s) \
> > +    for ( (id) = 0, (s) = get_ioreq_server((d), (id)); \
> > +          (id) < MAX_NR_IOREQ_SERVERS; \
> > +          (s) = get_ioreq_server((d), ++(id)) )
>
> There are three instances of stray pairs of parentheses here, each
> time when a macro parameter gets passed unaltered to
> get_ioreq_server().

OK.

> > @@ -301,8 +333,9 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
> >      }
> >  }
> >
> > +
> >  static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
>
> Stray addition of a blank line?

Yep. I'll get rid of that.
> > @@ -501,19 +531,19 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s,
> >  }
> >
> >  static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
> > -                                            bool is_default)
> > +                                            ioservid_t id)
> >  {
> >      unsigned int i;
> >      int rc;
> >
> > -    if ( is_default )
> > +    if ( IS_DEFAULT(s) )
> >          goto done;
>
> Wouldn't comparing the ID be even cheaper in cases like this one?
> And perhaps assert that ID and server actually match?

Ok. If id is available I'll use that.

> > @@ -741,35 +754,34 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
> >
> >      spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> >
> > -    rc = -ENOENT;
> > -    list_for_each_entry ( s,
> > -                          &d->arch.hvm_domain.ioreq_server.list,
> > -                          list_entry )
> > -    {
> > -        if ( s == d->arch.hvm_domain.default_ioreq_server )
> > -            continue;
> > +    s = get_ioreq_server(d, id);
> >
> > -        if ( s->id != id )
> > -            continue;
> > +    rc = -ENOENT;
> > +    if ( !s )
> > +        goto out;
> >
> > -        domain_pause(d);
> > +    rc = -EPERM;
> > +    if ( IS_DEFAULT(s) )
> > +        goto out;
>
> Here I think it is particularly strange to not use the ID in the check;
> this could even be done without holding the lock. Same in other
> functions below.
>
> > @@ -785,29 +797,27 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
> >
> >      spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> >
> > -    rc = -ENOENT;
> > -    list_for_each_entry ( s,
> > -                          &d->arch.hvm_domain.ioreq_server.list,
> > -                          list_entry )
> > -    {
> > -        if ( s == d->arch.hvm_domain.default_ioreq_server )
> > -            continue;
> > +    s = get_ioreq_server(d, id);
> >
> > -        if ( s->id != id )
> > -            continue;
> > +    rc = -ENOENT;
> > +    if ( !s )
> > +        goto out;
> >
> > -        *ioreq_gfn = s->ioreq.gfn;
> > +    rc = -EOPNOTSUPP;
> > +    if ( IS_DEFAULT(s) )
> > +        goto out;
>
> Why EOPNOTSUPP when it was just the same ENOENT as no
> server at all before (same further down)?

This was because of comments from Roger.
In some cases I think a return of EOPNOTSUPP is more appropriate. Passing the default id is a distinct failure case.

> >  void hvm_destroy_all_ioreq_servers(struct domain *d)
> >  {
> > -    struct hvm_ioreq_server *s, *next;
> > +    struct hvm_ioreq_server *s;
> > +    unsigned int id;
> >
> >      spin_lock_recursive(&d->arch.hvm_domain.ioreq_server.lock);
> >
> >      /* No need to domain_pause() as the domain is being torn down */
> >
> > -    list_for_each_entry_safe ( s,
> > -                               next,
> > -                               &d->arch.hvm_domain.ioreq_server.list,
> > -                               list_entry )
> > +    FOR_EACH_IOREQ_SERVER(d, id, s)
> >      {
> > -        bool is_default = (s == d->arch.hvm_domain.default_ioreq_server);
> > -
> > -        hvm_ioreq_server_disable(s, is_default);
> > -
> > -        if ( is_default )
> > -            d->arch.hvm_domain.default_ioreq_server = NULL;
> > +        if ( !s )
> > +            continue;
> >
> > -        list_del(&s->list_entry);
> > +        hvm_ioreq_server_disable(s);
> > +        hvm_ioreq_server_deinit(s);
> >
> > -        hvm_ioreq_server_deinit(s, is_default);
> > +        ASSERT(d->arch.hvm_domain.ioreq_server.count);
> > +        --d->arch.hvm_domain.ioreq_server.count;
>
> Seeing this - do you actually need the count as a separate field?
> I.e. are there performance critical uses of it, where going through
> the array would be too expensive? Most of the uses are just
> ASSERT()s anyway.

The specific case is in hvm_select_ioreq_server(). If there was no count then the array would have to be searched for the initial test.

> > @@ -1111,7 +1111,7 @@ int hvm_set_dm_domain(struct domain *d, domid_t domid)
> >       * still be set and thus, when the server is created, it will have
> >       * the correct domid.
> >       */
> > -    s = d->arch.hvm_domain.default_ioreq_server;
> > +    s = get_ioreq_server(d, DEFAULT_IOSERVID);
>
> Similar to above, is it really useful to go through get_ioreq_server()
> here (and in other similar cases)?

Perhaps I should re-introduce GET_IOREQ_SERVER() for array lookup without the bounds check and use that instead of the inline func in places such as this.
  Paul

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel