RE: [Xen-devel] pre-reservation of memory for domain creation
Keir and Jan,

Please ignore the previous mail; the patch in that text was incomplete.

I am working on the issue "pre-reservation of memory for domain creation" and have the following findings. Currently the guest initialization process in xend (XendDomainInfo.py) is:

    _constructDomain() --> domain_create() --> domain_max_vcpus() ...
        --> _initDomain() --> shadow_mem_control() ...

In domain_create() we previously reserved 1MB of memory for domain creation (as described in the xend comment), and this amount should NOT depend on the vcpu number. Later, shadow_mem_control() adjusts the shadow size to 256 pages per vcpu (plus some additional amount related to the guest memory size). Therefore C/S 20389, which changed the 1MB to 4MB to accommodate larger vcpu numbers, is wrong; I'm sorry for that.

Here is why the current 1MB does not work for large vcpu counts and, as we mentioned, crashes Xen. Each time sh_set_allocation() is called, it checks whether shadow_min_acceptable_pages() pages have already been allocated and, if not, allocates them; that works out to 128 pages per vcpu. But before d->max_vcpus is set, no guest vcpu has been initialized, so shadow_min_acceptable_pages() always returns 0. As a result, domain_create() only allocates the 1MB of shadow memory and never satisfies the 128 pages per vcpu that alloc_vcpu() needs. As we know, vcpu allocation is done in the XEN_DOMCTL_max_vcpus hypercall; however, at that point shadow_mem_control() has not yet been called, so all of those vcpus are allocated out of the pre-allocated 1MB of shadow memory. That is the bug: as the vcpu number grows, 1MB is no longer enough and Xen crashes. C/S 20389 merely exposes the issue.
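For reference, the per-vcpu floor mentioned above comes from shadow_min_acceptable_pages() in xen/arch/x86/mm/shadow/common.c; roughly (quoting from memory, so please treat it as a sketch rather than the exact current code):

    /* Sketch of shadow_min_acceptable_pages(): count the vcpus that exist
     * so far and require 128 shadow pages for each of them.  Before the
     * XEN_DOMCTL_max_vcpus hypercall has created any vcpus, the loop body
     * never runs and the function returns 0. */
    static unsigned int shadow_min_acceptable_pages(struct domain *d)
    {
        u32 vcpu_count = 0;
        struct vcpu *v;

        for_each_vcpu(d, v)
            vcpu_count++;

        return vcpu_count * 128;
    }

Doing the arithmetic with the numbers above: the 1MB pre-allocation is only 256 pages of 4KB, i.e. the 128-page floor for just two vcpus, while e.g. a 32-vcpu guest needs 32 * 128 = 4096 pages (16MB) to meet the floor. That is why the fixed pre-allocation stops working as the vcpu count grows.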
So I think the right process should be: after d->max_vcpus is set and before alloc_vcpu() runs, call sh_set_allocation() to satisfy the 128 pages per vcpu. The following patch does this. Does it work for you? Thanks!

Best Regards,
-- Dongxiao

Signed-off-by: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>

diff -r 13d4e78ede97 xen/arch/x86/mm/shadow/common.c
--- a/xen/arch/x86/mm/shadow/common.c   Wed Jan 13 08:33:34 2010 +0000
+++ b/xen/arch/x86/mm/shadow/common.c   Thu Jan 14 14:02:23 2010 +0800
@@ -41,6 +41,9 @@
 
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
 
+static unsigned int sh_set_allocation(struct domain *d,
+                                      unsigned int pages,
+                                      int *preempted);
 /* Set up the shadow-specific parts of a domain struct at start of day.
  * Called for every domain from arch_domain_create() */
 void shadow_domain_init(struct domain *d, unsigned int domcr_flags)
@@ -82,6 +85,12 @@ void shadow_vcpu_init(struct vcpu *v)
     }
 #endif
 
+    if ( !is_idle_domain(v->domain) )
+    {
+        shadow_lock(v->domain);
+        sh_set_allocation(v->domain, 128, NULL);
+        shadow_unlock(v->domain);
+    }
     v->arch.paging.mode = &SHADOW_INTERNAL_NAME(sh_paging_mode, 3);
 }
 
@@ -3099,7 +3108,7 @@ int shadow_enable(struct domain *d, u32
 {
     unsigned int r;
     shadow_lock(d);
-    r = sh_set_allocation(d, 1024, NULL); /* Use at least 4MB */
+    r = sh_set_allocation(d, 256, NULL); /* Use at least 1MB */
     if ( r != 0 )
     {
         sh_set_allocation(d, 0, NULL);
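To make the ordering concrete, here is a simplified, illustrative sketch of the XEN_DOMCTL_max_vcpus path the patch hooks into; the helper name is made up and the real handler (xen/common/domctl.c) does more bookkeeping, so treat this only as a picture of the call order, not as actual hypervisor code:

    /* Illustrative only: the vcpu count is fixed first, then each
     * alloc_vcpu() ends up in shadow_vcpu_init(), which with the patch
     * above tops the shadow pool up to the shadow_min_acceptable_pages()
     * floor for the vcpus created so far. */
    static int max_vcpus_sketch(struct domain *d, unsigned int max)
    {
        unsigned int i;

        d->max_vcpus = max;              /* vcpu count is known from here on */

        for ( i = 0; i < max; i++ )
        {
            /* alloc_vcpu() reaches shadow_vcpu_init(); with the patch that
             * grows the shadow pool before the new vcpu needs any of it. */
            if ( d->vcpu[i] == NULL && alloc_vcpu(d, i, i) == NULL )
                return -ENOMEM;
        }

        return 0;
    }

Previously every one of those vcpus had to be carved out of the fixed pool reserved at domain-creation time, which is exactly the crash described above.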
Jan Beulich wrote:
>>>> "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx> 13.01.10 03:34 >>>
>> If we didn't add this change, as Keir said, Xen will crash during
>> destruction of a partially-created domain.
>
> If this indeed is reproducible, I think it should be fixed.
>
>> However, I didn't notice the toolstack and
>> shadow_min_acceptable_pages() side at that time...
>> For now, should we adjust the shadow pre-alloc size to match
>> shadow_min_acceptable_pages() and modify the toolstack accordingly?
>
> I would say so, just with the problem that I can't reliably say what
> "accordingly" here would be (and hence I can't craft a patch I can
> guarantee will work at least in most of the cases).
>
> And as said before, I'm also not convinced that using the maximum
> possible number of vCPU-s for this initial calculation is really the
> right thing to do.
>
> Jan

Best Regards,
-- Dongxiao

Attachment: shadow.patch