
Re: [Xen-devel] [PATCH v3 2/2] ioreq-server: write protected range and forwarding




> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-
> bounces@xxxxxxxxxxxxx] On Behalf Of Jan Beulich
> Sent: Wednesday, September 3, 2014 9:17 PM
> To: Ye, Wei
> Cc: Tian, Kevin; keir@xxxxxxx; ian.campbell@xxxxxxxxxx;
> stefano.stabellini@xxxxxxxxxxxxx; tim@xxxxxxx; ian.jackson@xxxxxxxxxxxxx;
> Dugger, Donald D; xen-devel@xxxxxxxxxxxxx; Paul.Durrant@xxxxxxxxxx; Lv,
> Zhiyuan; Zhang, Yang Z
> Subject: Re: [Xen-devel] [PATCH v3 2/2] ioreq-server: write protected range
> and forwarding
> 
> >>> On 03.09.14 at 23:53, <wei.ye@xxxxxxxxx> wrote:
> > --- a/xen/include/asm-x86/hvm/domain.h
> > +++ b/xen/include/asm-x86/hvm/domain.h
> > @@ -48,7 +48,7 @@ struct hvm_ioreq_vcpu {
> >      evtchn_port_t    ioreq_evtchn;
> >  };
> >
> > -#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
> > +#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_WP + 1)
> >  #define MAX_NR_IO_RANGES  256
> 
> So in the end you didn't even find it necessary to bump the limit on the
> number of ranges per domain? Iirc that was your major objection against
> using existing infrastructure.
> 
I ran an experiment to measure the total number of PPGTT pages that need to be 
write-protected at one time on the HWS platform. The result shows that 512 pages 
need to be protected. Although that is larger than the rangeset limit 
(MAX_NR_IO_RANGES is 256), fortunately most of those pages are contiguous, so 
many pages merge into a single range and the total range count stays well below 
the limit. 
And thanks for reminding me to use the existing rangeset infrastructure. 
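Below is a minimal, self-contained sketch (not Xen code; the array contents are 
a hypothetical example) showing why mostly-contiguous pages collapse into very 
few ranges when adjacent frames are merged, keeping the count under 
MAX_NR_IO_RANGES:

    /* Illustrative sketch: count how many merged ranges a sorted list of
     * guest frame numbers produces when adjacent frames are coalesced. */
    #include <stdio.h>

    static unsigned int count_merged_ranges(const unsigned long *gfns,
                                            unsigned int nr)
    {
        unsigned int ranges = 0, i;

        for ( i = 0; i < nr; i++ )
            /* A new range starts only when the current gfn does not
             * immediately follow the previous one. */
            if ( i == 0 || gfns[i] != gfns[i - 1] + 1 )
                ranges++;

        return ranges;
    }

    int main(void)
    {
        /* Hypothetical layout: 512 pages in two contiguous blocks. */
        unsigned long gfns[512];
        unsigned int i;

        for ( i = 0; i < 256; i++ )
            gfns[i] = 0x1000 + i;          /* first contiguous block  */
        for ( ; i < 512; i++ )
            gfns[i] = 0x8000 + (i - 256);  /* second contiguous block */

        printf("512 pages collapse into %u range(s)\n",
               count_merged_ranges(gfns, 512));
        return 0;
    }

With that layout the 512 pages collapse into just 2 ranges, which is why the 
per-domain range limit is not a problem in practice.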

Regards
Wei


> Jan
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

