
RE: [Xen-ia64-devel][Patch] New memory initial interface for VTI


  • To: "Alex Williamson" <alex.williamson@xxxxxx>
  • From: "Zhang, Xing Z" <xing.z.zhang@xxxxxxxxx>
  • Date: Tue, 21 Nov 2006 18:14:25 +0800
  • Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 21 Nov 2006 02:14:32 -0800
  • List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
  • Thread-index: AccM1namet4hlY+ARLmDzjiPxoO2nwAeSAZg
  • Thread-topic: [Xen-ia64-devel][Patch] New memory initial interface for VTI

Hi Alex:
        Thanks for your suggestion. I have made the changes.
GFW_PAGES is kept as a macro because it is used several times in the code.

Signed-off-by: Zhang Xin <xing.z.zhang@xxxxxxxxx>
        

Good good study, day day up! ^_^
-Wing(zhang xin)
 
OTC,Intel Corporation

-----Original Message-----
From: Alex Williamson [mailto:alex.williamson@xxxxxx] 
Sent: November 21, 2006 3:02
To: Zhang, Xing Z
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-ia64-devel][Patch] New memory initial interface for VTI

Hi Wing,

   A few comments...

On Mon, 2006-11-20 at 12:51 +0800, Zhang, Xing Z wrote:
> New initial memory interface for ia64.
> Use xc_domain_memory_populate_physmap() to allocate memory and build
> the P2M/M2P tables in setup_guest(), so vmx_build_physmap_table()
> now only marks I/O space.
> 
> Signed-off-by: Zhang Xin <xing.z.zhang@xxxxxxxxx>
> diff -r ac5330d4945a tools/libxc/ia64/xc_ia64_hvm_build.c
> --- a/tools/libxc/ia64/xc_ia64_hvm_build.c      Wed Nov 15 12:15:34 2006 -0700
> +++ b/tools/libxc/ia64/xc_ia64_hvm_build.c      Sat Nov 18 03:18:56 2006 +0800
> @@ -546,19 +546,31 @@ add_pal_hob(void* hob_buf)
>      return 0;
>  }
>  
> +#define BELOW_3G_MEM_END (3 * MEM_G)
> +#define MMIO_AND_RESERVED_MEM (1 * MEM_G)
> +#define GFW_PAGES ((16 * MEM_M) >> PAGE_SHIFT)

   Should these be in xen/include/public/arch-ia64.h?  Some of these
seem redundant with things that are already in there:

BELOW_3G_MEM_END == MMIO_START
MMIO_AND_RESERVED_MEM == GFW_START + GFW_SIZE - MMIO_START
GFW_PAGES == (GFW_SIZE >> PAGE_SHIFT)
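
That is, the patch could define them directly in terms of the existing
arch-ia64.h constants rather than open-coding new values (a sketch using
the names above):

#define BELOW_3G_MEM_END       MMIO_START
#define MMIO_AND_RESERVED_MEM  (GFW_START + GFW_SIZE - MMIO_START)
#define GFW_PAGES              (GFW_SIZE >> PAGE_SHIFT)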


> +/*
> + * In this function we allocate memory and build the P2M/M2P tables
> + * for a VTI guest.  First, a discontiguous pfn list is initialized:
> + * normal memory begins at 0, while GFW memory and three other pages
> + * live at the addresses defined in xen/include/public/arch-ia64.h.
> + * xc_domain_memory_populate_physmap() is called three times, each
> + * with a different 'extent_order' value, which makes it convenient
> + * to allocate contiguous memory regions of different sizes.
> + */

   Watch line wrapping (80 columns or less).


> -    /* This will creates the physmap.  */
> +    if ( (pfn_list = malloc(nr_pages * sizeof(xen_pfn_t))) == NULL )

   Please separate the function call from the test, i.e.:

pfn_list = malloc(nr_pages * sizeof(xen_pfn_t));
if (pfn_list == NULL) ...
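
With the failure path spelled out it might look like this (a sketch;
PERROR and the error_out label are assumptions about how the
surrounding setup_guest() code reports and unwinds errors):

pfn_list = malloc(nr_pages * sizeof(xen_pfn_t));
if (pfn_list == NULL) {
    PERROR("Could not allocate pfn list for HVM domain");
    goto error_out;
}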

> +    // Allocate memory for the VTI guest, skipping VGA hole 0xA0000-0xC0000.
> +    rc = xc_domain_memory_populate_physmap(
> +        xc_handle, dom, (normal_pages > 0x28) ? 0x28 : normal_pages,
> +        0, 0, &pfn_list[0x00]);
> +    if ( (rc == 0) && (nr_pages > 0x30) )

   This test is a little strange (note it checks nr_pages where the
first call used normal_pages).  What would we do with an HVM domain
that only wants 640k of memory?  Should there be a test for some minimum
domain size to be considered viable?
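
For example, setup_guest() could reject undersized domains up front
(a sketch; MIN_HVM_PAGES is a hypothetical constant, not something
already in the tree):

#define MIN_HVM_PAGES 0x100   /* hypothetical floor: 4MB with 16K pages */

if (normal_pages < MIN_HVM_PAGES) {
    PERROR("HVM domain too small to be viable");
    goto error_out;
}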

> +        rc = xc_domain_memory_populate_physmap(
> +            xc_handle, dom, normal_pages - 0x30, 0, 0,
> +            &pfn_list[0x30]);

   Please use macros rather than magic numbers, e.g.:

#define VGA_START_PAGE  (VGA_IO_START >> PAGE_SHIFT)
#define VGA_END_PAGE    ((VGA_IO_START + VGA_IO_SIZE) >> PAGE_SHIFT)
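
With those, the two allocation calls above become self-describing (a
sketch built from the quoted patch; it assumes VGA_IO_START/VGA_IO_SIZE
come from arch-ia64.h, and it uses normal_pages consistently, per the
question above):

/* Allocate [0, VGA hole), then everything past the hole. */
rc = xc_domain_memory_populate_physmap(
    xc_handle, dom,
    (normal_pages > VGA_START_PAGE) ? VGA_START_PAGE : normal_pages,
    0, 0, &pfn_list[0]);
if (rc == 0 && normal_pages > VGA_END_PAGE)
    rc = xc_domain_memory_populate_physmap(
        xc_handle, dom, normal_pages - VGA_END_PAGE,
        0, 0, &pfn_list[VGA_END_PAGE]);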

Thanks,

        Alex

-- 
Alex Williamson                             HP Open Source & Linux Org.

Attachment: new_ia64_mem_interface.patch
Description: new_ia64_mem_interface.patch

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel

 

