Re: [Xen-devel] [PATCH] libxl: Increase device model startup timeout to 1min.
Anthony PERARD writes ("Re: [PATCH] libxl: Increase device model startup
timeout to 1min."):
> I tested an increased timeout last night. Here are the results.
>
> The machine is an AMD Opteron(tm) Processor 4284, with 8G of RAM and 8
> pCPUs. It's running Ubuntu 14.04 with Xen 4.4. On top of that, OpenStack
> has been deployed via devstack on a single node.
>
> The test is to run Tempest with --concurrency=4: 4 tests run in
> parallel, but they don't necessarily start a VM. When they do, it's a PV
> guest with 64MB of RAM and 1 vCPU, sometimes with double that amount of RAM.
>
> The stats:
> Tempest runs: 22
> Run time for each Tempest run: ~3000s
> Number of Tempest tests: 1143
> After 22 runs of Tempest:
> QEMU starts: 3352
> Number of starts that took more than 2s: 20
> Number of starts that took more than 9s: 6
> Maximum start time: 10.973713s
>
> I gathered the QEMU start times by running strace on each of them. I
> then looked at the time from the first execve('qemu') syscall until the
> syscall where QEMU responds on its QMP socket (at which point libxl has
> acknowledged that QEMU is running).
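The measurement described above can be sketched as follows. This is not the original script; the strace log format assumes `-ttt` (absolute epoch timestamps), and detecting the QMP response by looking for the "QMP" greeting string in a write() payload is a heuristic assumed here for illustration.

```python
import re

def qemu_start_time(strace_lines):
    """Return seconds from execve('qemu...') to the first write() carrying
    the QMP greeting, or None if either event is missing.

    Assumes strace -ttt output: '<epoch.micros> <syscall>(...) = <ret>'.
    """
    ts = re.compile(r'^(\d+\.\d+)\s+(.*)')
    t_exec = t_qmp = None
    for line in strace_lines:
        m = ts.match(line)
        if not m:
            continue
        t, call = float(m.group(1)), m.group(2)
        if t_exec is None and call.startswith('execve(') and 'qemu' in call:
            t_exec = t  # first execve of the qemu binary
        elif t_exec is not None and call.startswith('write(') and 'QMP' in call:
            t_qmp = t   # QEMU's QMP greeting mentions "QMP" in the payload
            break
    if t_exec is not None and t_qmp is not None:
        return t_qmp - t_exec
    return None

# Fabricated three-line trace, just to exercise the parser:
log = [
    '1000.000000 execve("/usr/bin/qemu-system-i386", ...) = 0',
    '1000.500000 open("/etc/qemu/...") = 3',
    '1010.973713 write(5, "{\\"QMP\\": {...}}", 64) = 64',
]
print(qemu_start_time(log))
```

Running this once per traced QEMU process and collecting the deltas would yield the distribution summarised in the stats above.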
Thanks for this information.
So from what you say, it appears that we are running at most 4 copies
of libxl and qemu in parallel, along with at most 4 VMs?
And out of 3352 qemu starts, we have:
  <= 2s:       3332
  >2s, <= 9s:    14
  > 9s:           6
?
Do you have any information about the maximum system load in general?
dom0 load, vcpu overcommit, etc.? Do you know what the guests are
doing?
I'm starting to think that this might be a real bug but that the bug
might be "Linux's I/O subsystem sometimes produces appalling latency
under load" (which is hardly news).
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel