Re: [Xen-devel] [PATCH] libxl: Increase device model startup timeout to 1min.
On Fri, Jun 26, 2015 at 05:41:07PM +0100, Ian Campbell wrote:
> On Fri, 2015-06-26 at 12:57 +0100, Anthony PERARD wrote:
> > On a busy host, QEMU may take more than 10s to start.
>
> Please show your working here so that in 2 years, when we want to know
> why this value was chosen, we don't have to go scrabbling around in the
> ML archives looking for the data you gathered.

OK. What about:

While running the whole OpenStack stack (installed via devstack on
Ubuntu) on a single machine, the device model sometimes fails to start
in time. When this happens, an strace of QEMU shows that it is still
loading its libraries. Such an strace can be found at:
http://lists.xenproject.org/archives/html/xen-devel/2015-06/msg03549.html

What the strace shows is that 0.5s may pass between one mmap() syscall
and the next. Normally, on this same machine, QEMU takes 0.32 seconds on
average to start while running Tempest (the OpenStack test suite). These
QEMUs run as backends for a PV guest.

> > Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
> > ---
> >  tools/libxl/libxl_internal.h | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
> > index e96d6b5..3a604f7 100644
> > --- a/tools/libxl/libxl_internal.h
> > +++ b/tools/libxl/libxl_internal.h
> > @@ -85,7 +85,7 @@
> >  #define LIBXL_INIT_TIMEOUT 10
> >  #define LIBXL_DESTROY_TIMEOUT 10
> >  #define LIBXL_HOTPLUG_TIMEOUT 10
> > -#define LIBXL_DEVICE_MODEL_START_TIMEOUT 10
> > +#define LIBXL_DEVICE_MODEL_START_TIMEOUT 60
> >  #define LIBXL_STUBDOM_START_TIMEOUT 30
> >  #define LIBXL_QEMU_BODGE_TIMEOUT 2
> >  #define LIBXL_XENCONSOLE_LIMIT 1048576

--
Anthony PERARD
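The constant being bumped bounds how long libxl waits between spawning the
device model and the device model reporting itself ready. As a minimal,
self-contained C sketch of that general pattern -- this is not libxl's
actual code; the qemu-system-i386 arguments, the pidfile path, and the
dm_ready() readiness check are illustrative assumptions:

/*
 * Sketch of a device-model start timeout, in the spirit of
 * LIBXL_DEVICE_MODEL_START_TIMEOUT.  NOT libxl's implementation:
 * the readiness test (a pidfile appearing) and the QEMU arguments
 * are assumptions for illustration only.
 */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define DEVICE_MODEL_START_TIMEOUT 60   /* seconds, as in the patch */

static int dm_ready(const char *pidfile)
{
    /* Illustrative readiness check: the device model wrote its pidfile. */
    return access(pidfile, R_OK) == 0;
}

int main(void)
{
    const char *pidfile = "/tmp/qemu-dm.pid";   /* assumed path */
    pid_t pid = fork();

    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {
        /* Child: exec the device model (arguments are illustrative). */
        execlp("qemu-system-i386", "qemu-system-i386",
               "-pidfile", pidfile, (char *)NULL);
        perror("execlp");
        _exit(127);
    }

    /* Parent: poll for readiness, giving up after the deadline. */
    time_t deadline = time(NULL) + DEVICE_MODEL_START_TIMEOUT;
    while (time(NULL) < deadline) {
        if (dm_ready(pidfile)) {
            printf("device model up (pid %d)\n", (int)pid);
            return 0;
        }
        /* Also notice if it died during startup. */
        if (waitpid(pid, NULL, WNOHANG) == pid) {
            fprintf(stderr, "device model exited during startup\n");
            return 1;
        }
        usleep(100 * 1000);   /* 100ms between polls */
    }

    fprintf(stderr, "device model did not start within %d seconds\n",
            DEVICE_MODEL_START_TIMEOUT);
    kill(pid, SIGKILL);
    waitpid(pid, NULL, 0);
    return 1;
}

Polling with an overall deadline, rather than a single blocking wait, lets
the parent distinguish "still loading its libraries" (the slow-start case
described above) from an early crash; libxl's real implementation is
event-driven rather than a polling loop like this.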