
Re: [Xen-devel] [PATCH v6 6/8] xen/arm: introduce GNTTABOP_cache_flush



On 16/10/14 18:29, Stefano Stabellini wrote:
> On Thu, 16 Oct 2014, David Vrabel wrote:
>> On 16/10/14 15:45, Stefano Stabellini wrote:
>>> Introduce a new hypercall to perform cache maintenance operation on
>>> behalf of the guest. The argument is a machine address and a size. The
>>> implementation checks that the memory range is owned by the guest or the
>>> guest has been granted access to it by another domain.
>>>
>>> Introduce grant_map_exists: an internal grant table function to check
>>> whether an mfn has been granted to a given domain on a target grant
>>> table.
>>>
>>> As grant_map_exists loops over all the guest grant table entries, limit
>>> DEFAULT_MAX_NR_GRANT_FRAMES to 10 to cap the loop to 5000 iterations
>>> max. Warn if the user sets max_grant_frames higher than 10.
>>
>> No.  This is much too low.
>>
>> A netfront with 4 queues wants 4 * 2 * 256 = 2048 grant references. So
>> this limit would only allow for two VIFs which is completely unacceptable.
>>
>> blkfront would be similarly constrained.
> 
> 10 is too low, that is a good point, thanks!
> 
> 
>> I think you're going to have to add continuations somehow or you are
>> going to have abandon this approach and use the SWIOTLB in the guest.
> 
> Actually the latest version already supports continuations (even though
> I admit I didn't explicitly test it).

I meant pre-empting the hypercall when it is in the middle of iterating
through the grant table.  I think it would be safe to drop the grant
table lock and resume the scan part way through since the relevant grant
should not change during the cache flush call (if it does, then the flush
will at worst safely fail).

> In any case the default max (32 frames) means 16000 iterations, which
> would take around 32000ns on a not-very-modern system. I don't think
> that is so bad. And of course it is just the theoretical worst case.
> 
> Maybe we should leave the default max as is?

At a bare minimum, yes.  But I really don't think we should be /adding/
hard scalability limits to Xen -- we should be removing them!

David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel