Re: [Xen-users] [Xen-devel] BalloonWorkerThread issue
--
From: R J [mailto:torushikeshj@xxxxxxxxx]
Sent: 06 January 2012 21:13
To: Paul Durrant
Cc: Konrad Rzeszutek Wilk; annie.li@xxxxxxxxxx; Konrad Rzeszutek Wilk; xen-devel@xxxxxxxxxxxxxxxxxxx; xen-users@xxxxxxxxxxxxxxxxxxx; xen-api@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] BalloonWorkerThread issue

Hello Paul,

Thanks for your email and explanation. I'm trying multiple combinations and have found that Windows only runs stably when the static max is not too many times larger than the static min. I tested the following static min/max pairs:

  512MB to 2GB   <- Pass
  1GB   to 4GB   <- Sometimes passes, sometimes fails
  2GB   to 4GB   <- Pass
  2GB   to 8GB   <- Pass
  4GB   to 32GB  <- Fail
  16GB  to 32GB  <- Pass

So it seems the problem is not the absolute size of RAM but the gap between static min and static max. Is there a defined multiplication factor for the initial squeeze-down?

Interestingly, if I start a VM with static max 32GB, dynamic max and dynamic min both 32GB, and static min 512MB, it boots successfully. The reason is that no ballooning is required, since the target equals the static max. Once the VM is up, setting its memory target to 1GB (squeezing from 32GB down to 1GB) works fine; there is no problem with the balloon driver at all. I repeated this for the other cases as well, keeping static max and target equal, and the result was "pass" every time. It is only the boot process that is hampered.
--

I believe top-posting is against etiquette for this list, so I won't continue it...

I don't think anyone has ever determined a multiplication factor that will cover *any* Windows SKU; there's too much variation between them. It's not that surprising that ballooning down after boot gives better results, since booting will almost certainly require more code and data to be paged in.

  Paul
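
As a follow-up for the archive: the boot-at-full-size workaround described above can be scripted. The sketch below uses the XenServer/XAPI xe CLI; the VM name "win-guest" and the memory sizes are placeholders, and it assumes a host where the standard vm-memory-limits-set and vm-memory-dynamic-range-set commands are available.

  # Look up the (halted) VM by name label -- placeholder name.
  uuid=$(xe vm-list name-label="win-guest" params=uuid --minimal)

  # Pin the dynamic range to static-max so the boot-time target equals
  # static-max and no balloon-down is needed while Windows boots.
  xe vm-memory-limits-set uuid=$uuid \
      static-min=512MiB dynamic-min=32GiB dynamic-max=32GiB static-max=32GiB

  xe vm-start uuid=$uuid

  # ...wait for the guest and its PV balloon driver to come up...

  # Now squeeze down to the real working size (1GiB here).
  xe vm-memory-dynamic-range-set uuid=$uuid min=1GiB max=1GiB

Lowering the dynamic range on the running VM is what triggers the balloon driver to release memory, which is exactly the post-boot squeeze that passed in every case above.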