[Xen-devel] [PATCH 3/6] xen: Add NUMA support to Xen
This patch modifies the increase_reservation and populate_physmap hypercalls
used to allocate memory to a domain. With NUMA support enabled, we balance the
allocation by using the domain's vcpu placement as a method of distributing the
pages locally to the physical cpu the vcpus will run upon.

--
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx

diffstat output:
 memory.c |   32 ++++++++++++++++++++++++++++++++
 1 files changed, 32 insertions(+)

Signed-off-by: Ryan Harper <ryanh@xxxxxxxxxx>
Signed-off-by: Ryan Grimm <grimm@xxxxxxxxxx>
---
# HG changeset patch
# User Ryan Harper <ryanh@xxxxxxxxxx>
# Node ID eda82207d4bf72df7acd43bfb937fcc39313bd0d
# Parent  e258ba216530fc45a74da2383d07e60f97974bdc
This patch modifies the increase_reservation and populate_physmap hypercalls
used to allocate memory to a domain. With NUMA support enabled, we balance the
allocation by using the domain's vcpu placement as a method of distributing the
pages locally to the physical cpu the vcpus will run upon.
Signed-off-by: Ryan Harper <ryanh@xxxxxxxxxx>
Signed-off-by: Ryan Grimm <grimm@xxxxxxxxxx>

diff -r e258ba216530 -r eda82207d4bf xen/common/memory.c
--- a/xen/common/memory.c	Mon May  1 21:40:13 2006
+++ b/xen/common/memory.c	Mon May  1 21:42:00 2006
@@ -40,6 +40,14 @@
     struct page_info *page;
     unsigned long i, mfn;

+#ifdef CONFIG_NUMA
+    int max_vcpu_id = 0;
+    struct vcpu *v;
+
+    for_each_vcpu (d, v)
+        if ( v->vcpu_id > max_vcpu_id )
+            max_vcpu_id = v->vcpu_id;
+#endif
     if ( !guest_handle_is_null(extent_list) &&
          !guest_handle_okay(extent_list, nr_extents) )
         return 0;
@@ -56,8 +64,16 @@
             return i;
         }

+#ifdef CONFIG_NUMA
+        /* spread each allocation across the total number of
+         * vcpus allocated to this domain */
+        if ( unlikely((page = __alloc_domheap_pages( d,
+                          (d->vcpu[i % (max_vcpu_id+1)])->processor,
+                          extent_order, flags )) == NULL) )
+#else
         if ( unlikely((page = alloc_domheap_pages(
             d, extent_order, flags)) == NULL) )
+#endif
         {
             DPRINTK("Could not allocate order=%d extent: "
                     "id=%d flags=%x (%ld of %d)\n",
@@ -89,6 +105,14 @@
     struct page_info *page;
     unsigned long i, j, gpfn, mfn;

+#ifdef CONFIG_NUMA
+    int max_vcpu_id = 0;
+    struct vcpu *v;
+
+    for_each_vcpu (d, v)
+        if ( v->vcpu_id > max_vcpu_id )
+            max_vcpu_id = v->vcpu_id;
+#endif
     if ( !guest_handle_okay(extent_list, nr_extents) )
         return 0;
@@ -107,8 +131,16 @@
         if ( unlikely(__copy_from_guest_offset(&gpfn, extent_list, i, 1)) )
             goto out;

+#ifdef CONFIG_NUMA
+        /* spread each allocation across the total number of
+         * vcpus allocated to this domain */
+        if ( unlikely((page = __alloc_domheap_pages( d,
+                          (d->vcpu[i % (max_vcpu_id+1)])->processor,
+                          extent_order, flags )) == NULL) )
+#else
         if ( unlikely((page = alloc_domheap_pages(
             d, extent_order, flags)) == NULL) )
+#endif
         {
             DPRINTK("Could not allocate order=%d extent: "
                     "id=%d flags=%x (%ld of %d)\n",

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel