
Re: [Xen-devel] [RFC PATCH V3 1/3] Xen: Increase hap/shadow page pool size to support more vcpus support



On Wed, Sep 20, 2017 at 04:13:43PM +0100, Wei Liu wrote:
>On Tue, Sep 19, 2017 at 11:06:26AM +0800, Lan Tianyu wrote:
>> Hi Wei:
>> 
>> On 2017-09-18 21:06, Wei Liu wrote:
>> > On Wed, Sep 13, 2017 at 12:52:47AM -0400, Lan Tianyu wrote:
>> >> This patch increases the page pool size when the max vcpu number is
>> >> larger than 128.
>> >>
>> >> Signed-off-by: Lan Tianyu <tianyu.lan@xxxxxxxxx>
>> >> ---
>> >>  xen/arch/arm/domain.c    |  5 +++++
>> >>  xen/arch/x86/domain.c    | 25 +++++++++++++++++++++++++
>> >>  xen/common/domctl.c      |  3 +++
>> >>  xen/include/xen/domain.h |  2 ++
>> >>  4 files changed, 35 insertions(+)
>> >>
>> >> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> >> index 6512f01..94cf70b 100644
>> >> --- a/xen/arch/arm/domain.c
>> >> +++ b/xen/arch/arm/domain.c
>> >> @@ -824,6 +824,11 @@ int arch_vcpu_reset(struct vcpu *v)
>> >>      return 0;
>> >>  }
>> >>  
>> >> +int arch_domain_set_max_vcpus(struct domain *d)
>> >> +{
>> >> +    return 0;
>> >> +}
>> >> +
>> >>  static int relinquish_memory(struct domain *d, struct page_list_head *list)
>> >>  {
>> >>      struct page_info *page, *tmp;
>> >> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> >> index dbddc53..0e230f9 100644
>> >> --- a/xen/arch/x86/domain.c
>> >> +++ b/xen/arch/x86/domain.c
>> >> @@ -1161,6 +1161,31 @@ int arch_vcpu_reset(struct vcpu *v)
>> >>      return 0;
>> >>  }
>> >>  
>> >> +int arch_domain_set_max_vcpus(struct domain *d)
>> > 
>> > The name doesn't match what the function does.
>> > 
>> 
>> I originally hoped to introduce a hook for each arch when setting max
>> vcpus. Each arch's function can then do customized things, hence the
>> name "arch_domain_set_max_vcpus".
>> 
>> How about "arch_domain_setup_vcpus_resource"?
>
>Before you go away and do a lot of work, please let us think about
>whether this is the right approach first.
>
>We are close to the freeze, and with the amount of patches we receive
>every day, an RFC patch like this one is low on my (can't speak for
>others) priority list. I am not sure when I will be able to get back to
>this, but do ping us if you want to know where things stand.

Hi, Wei.

The goal of this patch is to avoid running out of shadow pages. The
number of shadow pages is initialized (to 256 for hap, and 1024 for
shadow) when the domain is created. Then the max vcpus is set. In this
process, for each vcpu, construct_vmcs()->paging_update_paging_modes()
->hap_make_monitor_table() always consumes a shadow page. If there are
too many vcpus (i.e. more than 256), we run out of shadow pages.
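
To make the arithmetic concrete, here is a minimal sketch (plain C,
not actual Xen code; the default pool sizes and the one monitor-table
page per vcpu are the figures described above, and the 288-vcpu count
is just an arbitrary example above 256):

    #include <stdbool.h>
    #include <stdio.h>

    #define HAP_DEFAULT_POOL_PAGES    256   /* hap pool at domain creation */
    #define SHADOW_DEFAULT_POOL_PAGES 1024  /* shadow pool at domain creation */

    /*
     * hap_make_monitor_table() takes one page from the paging pool for
     * each vcpu, so setting max vcpus can exhaust the pool before the
     * toolstack gets a chance to resize it.
     */
    static bool pool_exhausted(unsigned int pool_pages, unsigned int vcpus)
    {
        return vcpus > pool_pages;
    }

    int main(void)
    {
        printf("hap, 128 vcpus:    %s\n",
               pool_exhausted(HAP_DEFAULT_POOL_PAGES, 128) ? "exhausted" : "ok");
        printf("hap, 288 vcpus:    %s\n",
               pool_exhausted(HAP_DEFAULT_POOL_PAGES, 288) ? "exhausted" : "ok");
        printf("shadow, 288 vcpus: %s\n",
               pool_exhausted(SHADOW_DEFAULT_POOL_PAGES, 288) ? "exhausted" : "ok");
        return 0;
    }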

To address this, there are three possible solutions:
1) Bump up the number of shadow pages to a proper value when setting
max vcpus, as this patch does. Actually, it could also be done in the
toolstack via XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION.
2) Have the toolstack (see libxl__arch_domain_create->xc_shadow_control())
enlarge or shrink the shadow memory to the right size (according to
xl.cfg, the size is 1MB per guest vCPU plus 8KB per MB of guest RAM;
see the sketch after this list) after setting max vcpus. If the order
of these two operations were swapped, this issue would also disappear.
3) Considering that the toolstack eventually adjusts the shadow memory
to a proper size anyway, enlarge the initial number of shadow pages
from 256 to 512 just as the v1 patch
(https://lists.xenproject.org/archives/html/xen-devel/2017-08/msg03048.html)
did; that doesn't lead to more memory consumption in the end. Since it
introduces the smallest change, I prefer this one.
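
For reference, here is a sketch of the sizing used in 2). The formula
is the xl.cfg rule quoted above; the xc_shadow_control() call mirrors
how libxl issued XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION around Xen 4.9,
and the libxc signature has changed across releases, so treat the call
as an assumption rather than the exact current interface:

    #include <xenctrl.h>

    /*
     * xl.cfg sizing rule: 1MB per guest vCPU plus 8KB per MB of guest
     * RAM, rounded up to whole megabytes.
     */
    static unsigned long shadow_mem_mb(unsigned int max_vcpus,
                                       unsigned long guest_ram_mb)
    {
        unsigned long kb = 1024UL * max_vcpus + 8UL * guest_ram_mb;

        return (kb + 1023) / 1024;
    }

    /* Hypothetical helper: resize the paging pool after max vcpus is set. */
    static int resize_paging_pool(xc_interface *xch, uint32_t domid,
                                  unsigned int max_vcpus,
                                  unsigned long guest_ram_mb)
    {
        unsigned long mb = shadow_mem_mb(max_vcpus, guest_ram_mb);

        return xc_shadow_control(xch, domid,
                                 XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
                                 NULL, 0, &mb, 0, NULL);
    }

E.g. a 256-vcpu guest with 4096MB of RAM would get 256MB + 32MB =
288MB of paging pool; the initial 256-page (1MB) pool only has to last
until this resize happens, which is also why the 256 -> 512 bump in 3)
costs no extra memory in the end.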

Which one do you think is better?

Thanks
Chao
