
Re: [Xen-devel] 32bit xen and "claim"

> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Subject: RE: 32bit xen and "claim"
> >>> On 01.11.12 at 21:55, Dan Magenheimer <dan.magenheimer@xxxxxxxxxx> wrote:
> >> >when prototyping the "claim" hypercall/subop, can I assume
> >> >that the CONFIG_X86 code in the hypervisor and, specifically
> >> >any separation of the concepts of xen_heap from dom_heap,
> >> >can be ignored?
> >>
> >> No, you shouldn't. Once adding support for memory amounts beyond 5Tb
> >> I expect the separation to become meaningful even for x86-64.
> >
> > On quick scan, I don't see anything obvious in the archives
> > that explains why 5Tb is the limit (rather than, say, 4Tb
> > or 8Tb, or some other power of two).  Could you provide a
> > pointer to this info or, if you agree it is non-obvious and
> > undocumented, say a few words of explanation?
> xen/include/asm-x86/config.h:DIRECTMAP_* (and the comment
> preceding all those #define-s).

Sorry if this is a silly question, but the hex value
0xffff880000000000 looks like a Linux kernel base address,
so it made me wonder:

Do I understand correctly that the 5TB limit only applies
because of legacy PV guests?
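
For my own notes, here is how I read the arithmetic behind the 5TB
figure. This is only a back-of-the-envelope sketch; the constants are
my assumptions based on the layout comment in
xen/include/asm-x86/config.h, not authoritative values:

```python
# Hedged sketch: derive the ~5TB direct-map limit from the x86-64 PV
# address-space layout. All constants below are assumed from the Xen 4.x
# layout comment in xen/include/asm-x86/config.h.

HYPERVISOR_VIRT_START = 0xffff800000000000  # start of the PV ABI hypervisor hole
DIRECTMAP_VIRT_START  = 0xffff830000000000  # assumed start of Xen's direct map
DIRECTMAP_VIRT_END    = 0xffff880000000000  # end of the hole; also Linux's PAGE_OFFSET

TB = 1 << 40

# The whole hypervisor hole is 8TB, but the direct map only gets the
# tail of it, which caps directly-mappable RAM at 5TB.
hole_tb      = (DIRECTMAP_VIRT_END - HYPERVISOR_VIRT_START) // TB
directmap_tb = (DIRECTMAP_VIRT_END - DIRECTMAP_VIRT_START) // TB

print(hole_tb, directmap_tb)  # → 8 5
```

If those constants are right, the limit is an artifact of the PV guest
virtual-address reservation rather than anything in the hardware.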
> > Also, just wondering, should exceeding 5Tb be on the 4.3
> > features list or is >5Tb physical memory still too far away?
> It _is_ on the feature list. So far I have just had way too many other
> issues to deal with, preventing me from getting started on that work.

Sorry, I should have looked that up first. :-}

Does it make sense to have a runtime option that unsets the
physical limit but disallows legacy PV guests?  If this
defaults to false for machines with RAM<=5TB but to true
for machines with RAM>5TB, then the feature is "done"
(AND we have put a stake in the ground to begin the
slow obsolescence of PV functionality).

Just an idea...


Xen-devel mailing list