
Re: [Xen-devel] [PATCH 1 of 4] Prevent low values of max_pages for domains doing sharing or paging



>>> On 16.02.12 at 17:08, Tim Deegan <tim@xxxxxxx> wrote:
> At 15:32 +0000 on 16 Feb (1329406348), Jan Beulich wrote:
>> >>> On 16.02.12 at 15:45, "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx> 
>> >>> wrote:
>> > But I've seen squeezed set criminally low max_pages value (i.e. 256).
>> > Granted, this is squeezed's problem, but shouldn't some sanity checking be
>> > wired into the hypervisor? Why should we even allow max_pages < tot_pages?
>> 
>> The only lower boundary that the hypervisor could (perhaps should)
>> enforce is that of not being able to guarantee guest forward progress:
>> On x86, up to 2 text pages plus up to 4 data pages, times the number
>> of page table levels (i.e. 24 pages on 64-bit).
> 
> OT: ISTR the actual number of memory accesses needed to complete one x86
> instruction is much crazier than that (e.g. a memory-to-memory copy that
> faults on the second page of the destination, pushing most of a stack
> frame before hitting the SS limit and double-faulting via a task gate,
> &c).  We did try to count it exactly for the shadow pagetables once but
> gave up and wildly overestimated instead (because perf near the limit
> would have been too awful anyway).

Hmm, indeed: When a page fault occurs, you don't need the text or
data pages the original instruction referenced anymore. Instead, you
now need up to 2 IDT pages, up to 4 GDT pages, and up to 2 stack
pages. So yes, the number is higher (but not that much). An exception
going through a task gate is of course much uglier, but luckily that
doesn't need to be considered, at least not for x86-64.
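
For completeness, a back-of-the-envelope sketch of that counting
(purely illustrative, and applying the same per-level multiplication
to the fault path; not anything the hypervisor actually computes):

/* Illustrative only: worst-case pages needed for guest forward
 * progress, following the estimates above. Not Xen code. */
#include <stdio.h>

#define PT_LEVELS   4   /* x86-64: four paging levels */
#define TEXT_PAGES  2   /* instruction may straddle a page boundary */
#define DATA_PAGES  4   /* e.g. memory-to-memory operand crossing a
                         * page on both source and destination */

int main(void)
{
    /* Each guest-linear access may pull in one page-table page
     * per paging level. */
    unsigned int no_fault = (TEXT_PAGES + DATA_PAGES) * PT_LEVELS;

    /* On #PF the original text/data pages are no longer needed;
     * instead the fault path may touch IDT, GDT and stack pages. */
    unsigned int idt = 2, gdt = 4, stack = 2;
    unsigned int fault_path = (idt + gdt + stack) * PT_LEVELS;

    printf("no-fault case: %u pages\n", no_fault);   /* 24 */
    printf("fault path:    %u pages\n", fault_path); /* 32 */
    return 0;
}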

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel