Re: [Xen-devel] Proposed new "memory capacity claim" hypercall/feature



On 08/11/2012 08:54, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

>>>> On 08.11.12 at 09:18, Keir Fraser <keir.xen@xxxxxxxxx> wrote:
>> On 08/11/2012 08:00, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> 
>>>>>> On 07.11.12 at 23:17, Dan Magenheimer <dan.magenheimer@xxxxxxxxxx> wrote:
>>>> It appears that the attempt to use 2MB and 1GB pages is done in
>>>> the toolstack, and if the hypervisor rejects it, the toolstack
>>>> tries smaller pages.  Thus, if physical memory is highly
>>>> fragmented (few or no order>=9 allocations available), this will
>>>> result in one hypercall per 4k page, so a 256GB domain would
>>>> require 64 million hypercalls.  And since, AFAICT, there is no
>>>> sane way to hold the heap_lock across even two hypercalls,
>>>> speeding up the in-hypervisor allocation path will not, by
>>>> itself, solve the TOCTOU race.
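
[For reference, a minimal sketch of the order-fallback pattern
described above.  xc_domain_populate_physmap_exact() is the real
libxc call; the loop structure, the helper name and the
one-extent-per-call batching are illustrative assumptions, not the
actual toolstack code.  Orders on x86: 18 = 1GB, 9 = 2MB, 0 = 4k.]

    #include <xenctrl.h>

    /* Try big extents first; fall back to smaller orders when the
     * hypervisor cannot satisfy the request. */
    static int populate_with_fallback(xc_interface *xch, uint32_t domid,
                                      xen_pfn_t base_gfn,
                                      unsigned long nr_pages)
    {
        static const unsigned int orders[] = { 18, 9, 0 }; /* 1G, 2M, 4k */
        unsigned long done = 0;
        unsigned int i;

        for ( i = 0; i < 3 && done < nr_pages; i++ )
        {
            unsigned long extent_pages = 1UL << orders[i];

            while ( nr_pages - done >= extent_pages )
            {
                xen_pfn_t gfn = base_gfn + done;

                /* One extent per hypercall here for clarity; the real
                 * toolstack batches many extents per call. */
                if ( xc_domain_populate_physmap_exact(xch, domid, 1,
                                                      orders[i], 0, &gfn) )
                    break; /* no order-N memory left: fall back */
                done += extent_pages;
            }
        }
        return done == nr_pages ? 0 : -1;
    }
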
>>> 
>>> No, even in the absence of large pages, the tool stack will do 8M
>>> allocations, just without requesting them to be contiguous.
>>> Whether 8M is a suitable value is another aspect; that value may
>>> predate hypercall preemption, and I don't immediately see why
>>> the tool stack shouldn't be able to request larger chunks (up to
>>> the whole amount at once).
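
[For scale: with 8M chunks of order-0 extents, each hypercall covers
8M/4k = 2048 pages, so a 256GB domain takes 256G/8M = 32768 hypercalls
rather than the 64 million single-page calls computed above.]
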
>> 
>> It is probably to allow other dom0 processing (including softirqs) to
>> preempt the toolstack task, in case the kernel was not built with
>> involuntary preemption enabled (having it disabled is the common case,
>> I believe?). 8M batches may provide enough returns to user space to
>> allow other work to get a look-in.
> 
> That may have mattered when ioctl-s were run with the big kernel
> lock held, but even 2.6.18 didn't do that anymore (using the
> .unlocked_ioctl field of struct file_operations), which means
> that even softirqs will get serviced in Dom0, since the preempted
> hypercall gets restarted via exiting to the guest (i.e. events get
> delivered). Scheduling indeed wouldn't happen, but if allocation
> latency can be brought down, 8M might turn out to be a pretty
> small chunk size.
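
[For reference, the mechanism referred to here: a driver that
registers .unlocked_ioctl is entered by the VFS without the Big
Kernel Lock, unlike the legacy .ioctl entry point.  A minimal sketch,
not the actual privcmd source:]

    #include <linux/fs.h>
    #include <linux/module.h>

    /* Called without the BKL held, so a long-running, restartable
     * hypercall does not serialize other kernel work. */
    static long privcmd_ioctl(struct file *file, unsigned int cmd,
                              unsigned long data)
    {
        /* ... issue the (preemptible) hypercall here ... */
        return 0;
    }

    static const struct file_operations privcmd_fops = {
        .owner          = THIS_MODULE,
        .unlocked_ioctl = privcmd_ioctl,  /* no BKL, unlike .ioctl */
    };
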

Ah, then I am out of date on how Linux services softirqs and preemption.
Can softirqs/preemption occur at any time, even in kernel mode, so long
as no locks are held?

I thought softirq-type work happened only during event servicing, and
only if the event servicing had interrupted user context (i.e., it would
not happen if the event interrupted kernel mode). So the restart of the
hypercall trap instruction would be an opportunity to service hardirqs,
but not softirqs or the scheduler...
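
[For reference, a simplified sketch of the 2.6-era interrupt-exit path
(kernel/softirq.c); exact details varied by version.  Pending softirqs
run whenever a hardirq completes and no other interrupt context is
active, regardless of whether user or kernel mode was interrupted;
rescheduling, by contrast, happens only on return to user mode unless
the kernel is built with CONFIG_PREEMPT:]

    /* Exit an interrupt context; simplified from kernel/softirq.c. */
    void irq_exit(void)
    {
        sub_preempt_count(IRQ_EXIT_OFFSET);

        /* Softirqs run here even if the hardirq interrupted kernel
         * mode, provided we are not nested inside another hardirq
         * or softirq. */
        if (!in_interrupt() && local_softirq_pending())
            invoke_softirq();

        preempt_enable_no_resched();
    }
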

 -- Keir

> If we do care about Dom0-s running even older kernels (assuming
> there ever was a privcmd implementation that didn't use the
> unlocked path), or if we have to assume non-Linux Dom0-s might
> have issues here, then making the tool stack behavior depend on
> the kernel kind/version without strong need wouldn't, of course,
> be very attractive.
> 
> Jan
> 


