
Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.



Thanks for your reply, Ian.

On 2/2/2016 1:05 AM, Ian Jackson wrote:
Yu, Zhang writes ("Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter 
max_wp_ram_ranges."):
On 2/2/2016 12:35 AM, Jan Beulich wrote:
On 01.02.16 at 17:19, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
So we also need to validate this param in hvm_allow_set_param.
Although hvm_allow_set_param currently performs no validation for the
other parameters, we need to do so for the new one. Is this
understanding correct?

Yes.

Another question: on the tool stack side, do you think an error
message would suffice? Or should xl be terminated?

I have no idea what consistent behavior in such a case would
be - I'll defer input on this to the tool stack maintainers.

Thank you.
Wei, which one do you prefer?

I think that arrangements should be made for the hypercall failure to
be properly reported to the caller, and properly logged.

I don't think it is desirable to duplicate the sanity check in
xl/libxl/libxc.  That would simply result in there being two limits to
update.


Sorry, I do not follow. What does "being two limits to update" mean?
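
As to the first point -- the hypercall failure being reported and
logged -- I take it to mean something roughly like the sketch below,
rather than a separate check in the tools. This is only my
understanding, not the actual patch: the parameter index
HVM_PARAM_MAX_WP_RAM_RANGES and the build_info field name are
assumptions of mine.

    /*
     * Sketch only, not the actual patch: let the hypervisor do the
     * validation, and surface any failure to the caller with a log
     * message.  HVM_PARAM_MAX_WP_RAM_RANGES and the max_wp_ram_ranges
     * field are names I am assuming here.
     */
    static int hvm_set_max_wp_ram_ranges(libxl__gc *gc, uint32_t domid,
                                         const libxl_domain_build_info *info)
    {
        int rc = xc_hvm_param_set(CTX->xch, domid,
                                  HVM_PARAM_MAX_WP_RAM_RANGES,
                                  info->u.hvm.max_wp_ram_ranges);

        if (rc) {
            LOG(ERROR, "failed to set max_wp_ram_ranges for domain %u", domid);
            return ERROR_FAIL;
        }

        return 0;
    }

If that is the intention, xl would then fail through the normal
domain-creation error path rather than via a check of its own.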

I have to say, though, that the situation with this parameter seems
quite unsatisfactory.  It seems to be a kind of bodge.


By "situation with this parameter", do you mean:
a> the introduction of this parameter in tool stack, or
b> the sanitizing for this parameter(In fact I'd prefer not to treat
the check of this parameter as a sanitizing, cause it only checks
the input against 4G to avoid data missing from uint64 to uint32
assignment in hvm_ioreq_server_alloc_rangesets)?
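
To be concrete, the check I have in mind is nothing more than the
sketch below. This is not the actual patch: the helper name is mine,
and the value in question is the one later narrowed to the 32-bit
rangeset limit in hvm_ioreq_server_alloc_rangesets.

    /*
     * Sketch only, not the actual patch: the value ends up as a 32-bit
     * rangeset limit in hvm_ioreq_server_alloc_rangesets(), so reject
     * anything that would be silently truncated by that assignment.
     */
    static int check_wp_ram_ranges_limit(uint64_t value)
    {
        if ( value != (uint32_t)value )   /* i.e. value >= 4G */
            return -EINVAL;

        return 0;
    }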



The changeable limit is there to prevent excessive resource usage by a
guest.  But the docs suggest that the excessive usage might be
normal.  That sounds like a suboptimal design to me.


Yes, there might be situations where this limit is set to some large
value, but I believe such situations would be very rare. As the docs
suggest, for XenGT, 8K is big enough for most cases.

For reference, here are the docs proposed in this patch:

   =item B<max_wp_ram_ranges=N>

   Limit the maximum number of write-protected ram ranges that can be
   tracked inside one ioreq server rangeset.

   An ioreq server uses a group of rangesets to track the I/O or memory
   resources to be emulated. The default limit on the number of ranges
   that one rangeset can allocate is deliberately small, because these
   ranges are allocated from the xen heap. Yet for write-protected ram
   ranges, there are circumstances under which the upper limit inside
   one rangeset should exceed the default. E.g. in Intel GVT-g, when
   tracking the PPGTTs (per-process graphic translation tables) on
   Intel Broadwell platforms, the number of page tables concerned can
   reach several thousand.

   For the Intel GVT-g Broadwell platform, 8192 is a suggested value
   for this parameter in most cases. But users who set this item
   explicitly are also expected to know the specific scenarios that
   necessitate this configuration, especially when this parameter is
   used:
   1> for virtual devices other than vGPU in GVT-g;
   2> for GVT-g, in some extreme cases, e.g. too many graphics-related
   applications in one VM, which together create a great number of
   per-process graphic translation tables;
   3> for GVT-g, on future cpu platforms which provide even more
   per-process graphic translation tables.
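
In a guest config file this would then be a single line, e.g. (8192
being just the suggested Broadwell value from the text above, and the
surrounding lines only illustrative):

    # Illustrative xl guest config fragment for an HVM guest running
    # GVT-g on Broadwell, with the write-protected ram range limit
    # raised to the suggested value.
    builder = "hvm"
    # ... other guest options ...
    max_wp_ram_ranges = 8192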

Having said that, if the hypervisor maintainers are happy with a
situation where this value is configured explicitly, and the
configurations where a non-default value is required are expected to
be rare, then I guess we can live with it.

Ian.



Thanks
Yu

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

