Re: [Xen-devel] [PATCH] Refactor ioreq server for better performance
> -----Original Message-----
> From: Yu, Zhang [mailto:yu.c.zhang@xxxxxxxxxxxxxxx]
> Sent: 01 July 2015 10:02
> To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx; Andrew Cooper;
> JBeulich@xxxxxxxx; Kevin Tian; zhiyuan.lv@xxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH] Refactor ioreq server for better
> performance
>
> On 6/30/2015 5:59 PM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Yu, Zhang [mailto:yu.c.zhang@xxxxxxxxxxxxxxx]
> >> Sent: 30 June 2015 08:11
> >> To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx; Andrew Cooper;
> >> JBeulich@xxxxxxxx; Kevin Tian; zhiyuan.lv@xxxxxxxxx
> >> Subject: Re: [Xen-devel] [PATCH] Refactor ioreq server for better
> >> performance
> >>
> >> Thank you, Paul.
> >>
> >
> > No problem :-)
> >
> > [snip]
> >>>
> >>> I think assigning a name to the rangeset and having a debug-key dump is
> >>> useful. Can you not duplicate that in your new implementation?
> >>>
> >>
> >> Well, I can add some dump routines, e.g. hvm_ioreq_server_dump_range(),
> >> to dump the ranges inside each ioreq server of a domain. This routine
> >> is similar to rangeset_domain_printk(). But unlike the rangeset,
> >> which is also inserted into the domain->rangesets list, the new
> >> rb_rangeset is only a member of the ioreq server. So we can dump the
> >> ranges inside a domain by first accessing each ioreq server.
> >> Do you think this approach is duplicative?
> >>
> >
> > Either add an rb_rangesets list to the domain, or have the debug key walk
> > the ioreq server list and dump the rangesets in each. The former is
> > obviously the simplest.
> >
>
> Thanks, Paul.
> Well, I agree the former approach would be simpler, but I still doubt
> whether it is more reasonable. :)
> IIUC, one of the reasons for struct domain to have a rangeset list (and
> a spinlock - rangesets_lock) is that there are iomem_caps and irq_caps
> for each domain. These two rangeset members of struct domain are
> platform independent.
> However, struct rb_rangeset is only supposed to be used in the ioreq
> server, which exists only for x86 HVM cases. Adding an rb_rangeset list
> member (and, similarly, an rb_rangesets_lock) to struct domain may be
> useless for the hardware domain and for platforms other than x86.

Fair enough.

> So, I'd like to register a new debug key to dump the ioreq server
> information, just like the keys that dump the IOMMU p2m table or the
> IRQ mappings. With a new debug key, we do not need to add a spinlock
> for rb_rangeset in struct domain; the one in the ioreq server would be
> enough.
> Does this sound reasonable?
>

That would be ok with me, but I'm not sure about claiming a whole debug
key for this. Is there any other one that you could piggy-back on? If
not, then maybe just make it part of the 'q' output.

  Paul

>
> [snip]
> >>>
> >>> Is this limit enough? I think having something that's toolstack-tunable
> >>> would be more future-proof.
> >>>
> >>
> >> By now, this value would suffice. I'd prefer this as a default value.
> >> As you know, I have also considered an xl param to do so. And one
> >> question is: would a per-domain param be appropriate? I mean, each
> >> HVM domain can have multiple ioreq servers. If this is acceptable, I
> >> can cook another patch to do so; is this OK?
> >>
> >
> > That's ok with me, but you may need to mention the idea of a follow-up
> > patch in the check-in comment.
> >
> Sure, and thanks. :)
>
> Yu
>
> >   Paul
>
> >> Thanks
> >> Yu
> >>
> >>> Paul
> >>>
> >>>>
> >>>>  struct hvm_ioreq_server {
> >>>>      struct list_head       list_entry;
> >>>> @@ -68,7 +68,7 @@ struct hvm_ioreq_server {
> >>>>      /* Lock to serialize access to buffered ioreq ring */
> >>>>      spinlock_t             bufioreq_lock;
> >>>>      evtchn_port_t          bufioreq_evtchn;
> >>>> -    struct rangeset        *range[NR_IO_RANGE_TYPES];
> >>>> +    struct rb_rangeset     *range[NR_IO_RANGE_TYPES];
> >>>>      bool_t                 enabled;
> >>>>      bool_t                 bufioreq_atomic;
> >>>>  };
> >>>> diff --git a/xen/include/xen/rb_rangeset.h
> >>>> b/xen/include/xen/rb_rangeset.h
> >>>> new file mode 100644
> >>>> index 0000000..768230c
> >>>> --- /dev/null
> >>>> +++ b/xen/include/xen/rb_rangeset.h
> >>>> @@ -0,0 +1,45 @@
> >>>> +/*
> >>>> +    Red-black tree based rangeset
> >>>> +
> >>>> +    This program is free software; you can redistribute it and/or modify
> >>>> +    it under the terms of the GNU General Public License as published by
> >>>> +    the Free Software Foundation; either version 2 of the License, or
> >>>> +    (at your option) any later version.
> >>>> +
> >>>> +    This program is distributed in the hope that it will be useful,
> >>>> +    but WITHOUT ANY WARRANTY; without even the implied warranty of
> >>>> +    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> >>>> +    GNU General Public License for more details.
> >>>> +
> >>>> +    You should have received a copy of the GNU General Public License
> >>>> +    along with this program; if not, write to the Free Software
> >>>> +    Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> >>>> +*/
> >>>> +
> >>>> +#ifndef __RB_RANGESET_H__
> >>>> +#define __RB_RANGESET_H__
> >>>> +
> >>>> +#include <xen/rbtree.h>
> >>>> +
> >>>> +struct rb_rangeset {
> >>>> +    long nr_ranges;
> >>>> +    struct rb_root rbroot;
> >>>> +};
> >>>> +
> >>>> +struct rb_range {
> >>>> +    struct rb_node node;
> >>>> +    unsigned long s, e;
> >>>> +};
> >>>> +
> >>>> +struct rb_rangeset *rb_rangeset_new(void);
> >>>> +void rb_rangeset_destroy(struct rb_rangeset *r);
> >>>> +bool_t rb_rangeset_overlaps_range(struct rb_rangeset *r,
> >>>> +    unsigned long s, unsigned long e);
> >>>> +bool_t rb_rangeset_contains_range(
> >>>> +    struct rb_rangeset *r, unsigned long s, unsigned long e);
> >>>> +int rb_rangeset_add_range(struct rb_rangeset *r,
> >>>> +    unsigned long s, unsigned long e);
> >>>> +int rb_rangeset_remove_range(struct rb_rangeset *r,
> >>>> +    unsigned long s, unsigned long e);
> >>>> +
> >>>> +#endif /* __RB_RANGESET_H__ */
> >>>> --
> >>>> 1.9.1

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
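
The hunk quoted above shows only the proposed header, so for context here is a
minimal, hypothetical sketch of the kind of lookup such a structure enables. It
is not the code from the patch; it assumes the struct layout declared in the
proposed xen/rb_rangeset.h, Xen's Linux-derived rbtree helpers (rb_entry(),
rb_left/rb_right), and that the ranges stored in the tree never overlap.

#include <xen/rbtree.h>
#include <xen/types.h>

/* As declared in the proposed xen/rb_rangeset.h above. */
struct rb_rangeset {
    long nr_ranges;
    struct rb_root rbroot;
};

struct rb_range {
    struct rb_node node;
    unsigned long s, e;
};

bool_t rb_rangeset_contains_range(struct rb_rangeset *r,
                                  unsigned long s, unsigned long e)
{
    struct rb_node *node = r->rbroot.rb_node;

    /*
     * Binary search over disjoint ranges: O(log n) per lookup, rather
     * than the O(n) list walk of the existing rangeset implementation.
     */
    while ( node )
    {
        struct rb_range *range = rb_entry(node, struct rb_range, node);

        if ( e < range->s )
            node = node->rb_left;   /* query lies entirely below this range */
        else if ( s > range->e )
            node = node->rb_right;  /* query lies entirely above this range */
        else
            /*
             * Overlap found; with disjoint ranges the query is contained
             * iff it fits entirely inside this one.
             */
            return (s >= range->s) && (e <= range->e);
    }

    return 0;
}

Keeping lookups logarithmic in the number of tracked ranges is what matters
once the per-server range limit discussed above is raised, since these checks
sit on the emulation hot path.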
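
The dump helper and debug key discussed in the thread might look roughly like
the sketch below. It is not code from the patch: hvm_ioreq_server_dump_range()
is only the name floated in the discussion, and the per-domain list and lock
names (d->arch.hvm_domain.ioreq_server.list / .lock) are assumptions that vary
between Xen versions. Such a helper could be called from a new keyhandler or
folded into the existing 'q' (dump domain info) output, as Paul suggests.

#include <xen/lib.h>
#include <xen/list.h>
#include <xen/rbtree.h>
#include <xen/sched.h>
#include <asm/hvm/domain.h>

/* Print every range held in one rb_rangeset, in ascending order. */
static void rb_rangeset_printk(struct rb_rangeset *r)
{
    struct rb_node *node;

    for ( node = rb_first(&r->rbroot); node; node = rb_next(node) )
    {
        struct rb_range *range = rb_entry(node, struct rb_range, node);

        printk("    [%lx, %lx]\n", range->s, range->e);
    }
}

/* Walk the domain's ioreq servers and dump the rangesets in each. */
void hvm_ioreq_server_dump_range(struct domain *d)
{
    struct hvm_ioreq_server *s;
    unsigned int i, id = 0;

    /* Assumed field names; the real list head and lock live in struct hvm_domain. */
    spin_lock(&d->arch.hvm_domain.ioreq_server.lock);

    list_for_each_entry ( s, &d->arch.hvm_domain.ioreq_server.list,
                          list_entry )
    {
        printk("  ioreq server %u:\n", id++);
        for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
            if ( s->range[i] )
                rb_rangeset_printk(s->range[i]);
    }

    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
}

Because the walk only takes the ioreq server lock, no extra per-domain
rb_rangesets list or lock is needed, which is the point Yu makes above.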