[Xen-devel] [PULL 3/4] xen: Fix vcpu initialization.
From: Anthony PERARD <anthony.perard@xxxxxxxxxx>

Every vcpu needs an evtchn bound in QEMU, even those that are offline at
QEMU initialisation.

Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
---
 xen-all.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen-all.c b/xen-all.c
index 10af44c..48e881b 100644
--- a/xen-all.c
+++ b/xen-all.c
@@ -614,13 +614,13 @@ static ioreq_t *cpu_get_ioreq(XenIOState *state)
     }
 
     if (port != -1) {
-        for (i = 0; i < smp_cpus; i++) {
+        for (i = 0; i < max_cpus; i++) {
             if (state->ioreq_local_port[i] == port) {
                 break;
             }
         }
 
-        if (i == smp_cpus) {
+        if (i == max_cpus) {
             hw_error("Fatal error while trying to get io event!\n");
         }
 
@@ -1115,10 +1115,10 @@ int xen_hvm_init(MemoryRegion **ram_memory)
         hw_error("map buffered IO page returned error %d", errno);
     }
 
-    state->ioreq_local_port = g_malloc0(smp_cpus * sizeof (evtchn_port_t));
+    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
 
     /* FIXME: how about if we overflow the page here? */
-    for (i = 0; i < smp_cpus; i++) {
+    for (i = 0; i < max_cpus; i++) {
         rc = xc_evtchn_bind_interdomain(state->xce_handle, xen_domid,
                                         xen_vcpu_eport(state->shared_page, i));
        if (rc == -1) {
-- 
1.7.2.5
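For context, here is a minimal sketch of how the two changed sites fit together after this patch: the loop in xen_hvm_init() binds one interdomain event channel per possible vcpu (max_cpus rather than smp_cpus), so vcpus that are still offline when QEMU starts also get a local port, and cpu_get_ioreq() can later map a pending port back to a vcpu index. The assignment of the returned port and the bind error message fall outside the quoted hunks and are assumptions here, not part of the patch text.

    /* Sketch of the binding loop in xen_hvm_init() after the patch; the
     * ioreq_local_port[i] assignment and the error message are assumed. */
    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
    for (i = 0; i < max_cpus; i++) {
        /* Bind a port for every possible vcpu, including those still
         * offline at QEMU start; they may be brought online later. */
        rc = xc_evtchn_bind_interdomain(state->xce_handle, xen_domid,
                                        xen_vcpu_eport(state->shared_page, i));
        if (rc == -1) {
            hw_error("evtchn bind interdomain error %d\n", errno);
        }
        state->ioreq_local_port[i] = rc;
    }

    /* Sketch of the lookup in cpu_get_ioreq(); scanning up to max_cpus
     * keeps it consistent with the binding loop above. */
    for (i = 0; i < max_cpus; i++) {
        if (state->ioreq_local_port[i] == port) {
            break;
        }
    }
    if (i == max_cpus) {
        hw_error("Fatal error while trying to get io event!\n");
    }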