
Re: [Xen-ia64-devel] Virtual mem map



On Monday 09 January 2006 06:45, Tian, Kevin wrote:
> From: Tristan Gingold
>
> >Sent: 6 January 2006 20:58
> >Hi,
> >
> >I am currently thinking about virtual mem map.
> >In Linux, the virtual mem map is (surprise) virtually mapped.
>
> Surprising, but efficient and necessary. ;-)
>
> >In Xen, we can use the same approach or we can manually cut the mem map
> > into several pieces using a mechanism similar to paging.
[...]
> So in both cases you mentioned, an extra structure to track the mapping is
> necessary. Maybe the difference is that you propose to use another, simpler
> structure (like a simple array/list) instead of a multi-level page table? And
> then modify all memmap-related macros (like pfn_valid, page_to_pfn, etc.) to
> make them aware of the holes within the memmap?
Yes, here are more details on my original proposal:
The structure uses a two-level access:
* The first access is a lookup in a table of offset/length entries.
* The offset is an offset into the page frame table; the length is used only to
  check validity.

I think this structure is simple enough to be fast.
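
For illustration, here is a minimal C sketch of such a structure, under my own
assumptions (the names granule_entry, granule_table, frame_table, GRANULE_SHIFT
and NR_GRANULES are made up for this mail, not existing Xen code; the 1GB
granule is the one discussed below):

    /* Illustrative sketch only, not actual Xen code.
     * First access:  granule_table[], one entry per 1GB granule of the
     *                physical address space.
     * Second access: frame_table[], the page frame table itself, kept
     *                compact (holes between granules are squeezed out). */
    #define GRANULE_SHIFT   30              /* each first-level entry covers 1GB */
    #define NR_GRANULES     (1UL << 12)     /* 2**12 entries => 2**42 B covered */

    struct page_info { unsigned long flags; };  /* stand-in for the real mem map
                                                   entry type */

    struct granule_entry {
        unsigned long offset;   /* start of this granule's pages in frame_table */
        unsigned long length;   /* number of valid pages; used only for validity */
    };

    extern struct granule_entry granule_table[NR_GRANULES];
    extern struct page_info *frame_table;       /* the compact mem map */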

For memory usage:
* Each entry of the first array describes 1GB of memory.  An entry is 32 bits.
  16KB for the first array can describe 2**12 * 2**30 = 2**42 B of memory.
  (Dan's machine physical memory is below 2**40.)
* I think a 1GB granule is good enough, unless you have a machine with very
  small DIMMs.  In that case, we can use 512MB or 256MB instead of 1GB.
* 1GB is 2**16 to 2**18 pages (depending on the page size, 16KB down to 4KB).
  Thus, the offset may be 18 bits and the length 14 bits (to be multiplied
  by 4).
In conclusion, the memory footprint is *very* small, maybe even too small?
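
If one really wants the 32-bit entries above, the two fields of the
granule_entry sketch collapse into a single packed word.  A possible (purely
hypothetical) encoding matching the numbers above:

    /* Hypothetical packing of one 32-bit first-level entry:
     *   bits  0..17  offset field (18 bits)
     *   bits 18..31  length field (14 bits, stored in units of 4 pages)
     * 2**12 entries * 4 bytes = 16KB of first-level table,
     * 2**12 entries * 1GB    = 2**42 B of described memory. */
    #define GE_OFFSET_BITS  18
    #define GE_LENGTH_BITS  14

    #define GE_OFFSET(e)  ((e) & ((1u << GE_OFFSET_BITS) - 1))
    #define GE_LENGTH(e)  ((((e) >> GE_OFFSET_BITS) & \
                            ((1u << GE_LENGTH_BITS) - 1)) * 4)
    #define GE_PACK(off, len) \
        (((off) & ((1u << GE_OFFSET_BITS) - 1)) | \
         ((((len) / 4) & ((1u << GE_LENGTH_BITS) - 1)) << GE_OFFSET_BITS))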

The memmap-related macros must be rewritten.
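
As a rough idea of what that rewrite could look like on top of the
granule_entry sketch above (again hypothetical, assuming 16KB pages and that
each granule's valid pages start at its bottom):

    /* Rough sketch of the rewritten accessors -- illustrative only. */
    #define PAGE_SHIFT          14                            /* 16KB pages */
    #define GRANULE_PFN_SHIFT   (GRANULE_SHIFT - PAGE_SHIFT)  /* 2**16 pages/1GB */
    #define GRANULE_PFN_MASK    ((1UL << GRANULE_PFN_SHIFT) - 1)

    static inline int my_pfn_valid(unsigned long pfn)
    {
        unsigned long idx = pfn >> GRANULE_PFN_SHIFT;

        if (idx >= NR_GRANULES)
            return 0;
        /* The length check rejects pfns that fall into a memory hole. */
        return (pfn & GRANULE_PFN_MASK) < granule_table[idx].length;
    }

    static inline struct page_info *my_pfn_to_page(unsigned long pfn)
    {
        struct granule_entry *ge = &granule_table[pfn >> GRANULE_PFN_SHIFT];

        return frame_table + ge->offset + (pfn & GRANULE_PFN_MASK);
    }

page_to_pfn is the less obvious direction: going from a frame_table pointer
back to a pfn needs a reverse lookup (or a pfn field stored in the page
structure), so it is a bit more work than the two accessors above.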

Tristan.


_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

