Re: [Xen-devel] [RFC PATCH V2 1/4] xen/hap: Increase hap page pool size for more vcpus support
>>> On 01.09.17 at 10:19, <tianyu.lan@xxxxxxxxx> wrote:
>> On 31/08/2017 21:56, Andrew Cooper wrote:
>> On 31/08/17 06:01, Lan Tianyu wrote:
>>> This patch is to increase hap page pool size to support more vcpus in a
>>> single VM.
>>>
>>> Signed-off-by: Lan Tianyu <tianyu.lan@xxxxxxxxx>
>>> ---
>>> xen/arch/x86/mm/hap/hap.c | 10 +++++++++-
>>> 1 file changed, 9 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
>>> index cdc77a9..96a7ed0 100644
>>> --- a/xen/arch/x86/mm/hap/hap.c
>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>> @@ -464,6 +464,7 @@ void hap_domain_init(struct domain *d)
>>>  int hap_enable(struct domain *d, u32 mode)
>>>  {
>>>      unsigned int old_pages;
>>> +    unsigned int pages;
>>>      unsigned int i;
>>>      int rv = 0;
>>>
>>> @@ -473,7 +474,14 @@ int hap_enable(struct domain *d, u32 mode)
>>>      if ( old_pages == 0 )
>>>      {
>>>          paging_lock(d);
>>> -        rv = hap_set_allocation(d, 256, NULL);
>>> +
>>> +        /* Increase hap page pool with max vcpu number. */
>>> +        if ( d->max_vcpus > 128 )
>>> +            pages = 512;
>>> +        else
>>> +            pages = 256;
>>> +
>>> +        rv = hap_set_allocation(d, pages, NULL);
>>
>> What effect is this intended to have? hap_enable() is always called
>> when d->max_vcpus is 0.
>
> Sorry, I didn't notice that max_vcpus wasn't set at that point. I had hoped
> to allocate hap pages according to the vcpu number. This means we don't know
> how many vcpus will be used when allocating hap pages during domain
> creation, right? If so, we have to increase the page number unconditionally.
But you were already told that this isn't really acceptable. Did you
consider calling hap_set_allocation() a second time once the vCPU
count was set?
Jan
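
[Editorial note: for context on Jan's suggestion, below is a minimal sketch of
resizing the HAP pool once the vCPU count is established. The helper name,
hook point and page counts are assumptions for illustration, not part of the
posted series; hap_set_allocation() is used exactly as in the hunk quoted
above, under the paging lock.]

/*
 * Illustrative sketch only: resize the HAP page pool after d->max_vcpus
 * is known, e.g. invoked from wherever the vCPU count is established.
 * The 256/512 page figures are taken from the discussion above; the
 * real scaling policy would need measurement.
 */
static int hap_resize_pool_for_vcpus(struct domain *d)
{
    /* 256 pages is the existing default set up by hap_enable(). */
    unsigned int pages = 256;
    int rv;

    /* Assumed scaling: guests with many vCPUs get a larger pool. */
    if ( d->max_vcpus > 128 )
        pages = 512;

    /* hap_set_allocation() must be called with the paging lock held. */
    paging_lock(d);
    rv = hap_set_allocation(d, pages, NULL);
    paging_unlock(d);

    return rv;
}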
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel