
Re: [Xen-devel] [PATCH] xen/domctl: lower loglevel of XEN_DOMCTL_memory_mapping



>>> On 09.09.15 at 17:19, <malcolm.crossley@xxxxxxxxxx> wrote:
> On 09/09/15 15:50, Konrad Rzeszutek Wilk wrote:
>> On Wed, Sep 09, 2015 at 08:33:52AM -0600, Jan Beulich wrote:
>>>>>> On 09.09.15 at 16:20, <konrad.wilk@xxxxxxxxxx> wrote:
>>>> Perhaps the solution is to remove the first printk(s) and just issue
>>>> them once the operation has completed? That may fix the outstanding
>>>> tasklet problem?
>>>
>>> Considering that this is a tool stack based retry, how would the
>>> hypervisor know when the _whole_ operation is done?
>> 
>> I was merely thinking of moving the printk _after_ the map_mmio_regions
>> so there wouldn't be any outstanding preemption points in map_mmio_regions
>> (so it can at least do the 64 PFNs).
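
Concretely, that reordering would be something like this (a sketch only,
loosely following the shape of the XEN_DOMCTL_memory_mapping handler in
xen/common/domctl.c; the message text is illustrative, not the exact one
in tree):

    if ( add )
    {
        ret = map_mmio_regions(d, gfn, nr_mfns, mfn);
        if ( !ret )
            /* Log only once the range has actually been mapped. */
            printk(XENLOG_G_INFO
                   "memory_map: added dom%d gfn=%lx mfn=%lx nr=%lx\n",
                   d->domain_id, gfn, mfn, nr_mfns);
    }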
>> 
>> But going forward a couple of ideas:
>> 
>>  - The 64 limit was arbitrary. It could have been 42, or PFNs /
>>    num_online_cpus(), or actually finding out the size of the BAR and
>>    figuring out the optimal batch so that it completes in under 1ms.
>>    Or perhaps just provide a boot-time parameter for those that really
>>    are struggling with this (sketch below).
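
The hypervisor side of such a knob could be as small as this (a sketch;
the parameter name and the variable are made up for illustration, while
integer_param() is the existing boot-parameter hook):

    /*
     * Pages handled per XEN_DOMCTL_memory_mapping invocation before the
     * caller is asked to retry; 64 today, overridable at boot time.
     */
    static unsigned int __read_mostly memmap_batch_pfns = 64;
    integer_param("memmap-batch-pfns", memmap_batch_pfns);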
> 
> The issue of it taking a long time to map a large BAR is caused by the
> unconditional memory_type_changed call at the bottom of the
> XEN_DOMCTL_memory_mapping function.
> 
> The memory_type_changed call results in a full data cache flush on a VM
> with a PCI device assigned.
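
For reference, the expensive path is roughly this (paraphrased from
memory from xen/arch/x86/hvm/mtrr.c; treat the exact condition as a
sketch):

    void memory_type_changed(struct domain *d)
    {
        /* Only domains with a device assigned pay the cost ... */
        if ( need_iommu(d) && d->vcpu && d->vcpu[0] )
        {
            p2m_memory_type_changed(d); /* recompute effective EPT types */
            flush_all(FLUSH_CACHE);     /* ... and this flush is the cost */
        }
    }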
> 
> We have seen this overhead cause a 1GB BAR to take 20 seconds to map its
> MMIO.
> 
> If the 64 limit was arbitrary then I would suggest increasing it to at
> least 1024, so that at least 4MB of BAR can be mapped in one go, which
> reduces the overhead by a factor of 16 (worked numbers below).
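
To put numbers on that: a 1GB BAR covers 1GB / 4KB = 262144 PFNs. At 64
PFNs per domctl that is 4096 invocations, each ending in
memory_type_changed (and hence a cache flush); at 1024 PFNs per domctl it
is 256 invocations, i.e. 16x fewer flushes.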

1024 may be a little much, but 256 is certainly a possibility, plus
Konrad's suggestion to allow this limit to be controlled via a command
line option.

> Currently the toolstack attempts lower and lower powers-of-2 sizes of
> the MMIO region to be mapped until the hypercall succeeds (see the
> sketch below).
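
That retry is, in spirit, the following loop (simplified from what
libxc's xc_domain_memory_mapping() does; do_memory_mapping_domctl() is a
hypothetical wrapper around the raw domctl):

    unsigned long done = 0, batch = nr_mfns;

    while ( done < nr_mfns )
    {
        unsigned long left = nr_mfns - done;
        unsigned long nr = left < batch ? left : batch;
        int rc = do_memory_mapping_domctl(dom, gfn + done, mfn + done,
                                          nr, add);

        if ( rc && errno == E2BIG && batch > 1 )
        {
            batch >>= 1;   /* hypervisor said too big: halve and retry */
            continue;
        }
        if ( rc )
            break;         /* a real error, give up */
        done += nr;
    }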
> 
> It is not clear to me why we have an unconditional call to
> memory_type_changed in the domctl.
> Can somebody explain why it can't be made conditional on errors?

You mean only on success? That may be possible (albeit, as you can see
from the commit introducing it, I wasn't sure, and I'm still not sure),
but it wouldn't buy you anything.
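
For the record, the change under discussion would be a one-liner in the
XEN_DOMCTL_memory_mapping handler (a sketch of the condition only):

    /* Recompute memory types only when the mapping actually changed. */
    if ( !ret )
        memory_type_changed(d);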

Jan




 

