
Re: [Xen-devel] [PATCH 6/7] xen: Fix vcpus initialisation.



On Mon, 17 Jun 2013, Anthony PERARD wrote:
> Each vcpu needs a call to xc_evtchn_bind_interdomain in QEMU, even those
> that are unplugged at QEMU initialisation.
> 
> Without this patch, any hot-plugged CPU will be reported as "Stuck ??" when
> Linux tries to use it.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>

Acked-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>


>  xen-all.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/xen-all.c b/xen-all.c
> index daf43b9..15be8ed 100644
> --- a/xen-all.c
> +++ b/xen-all.c
> @@ -632,13 +632,13 @@ static ioreq_t *cpu_get_ioreq(XenIOState *state)
>      }
>  
>      if (port != -1) {
> -        for (i = 0; i < smp_cpus; i++) {
> +        for (i = 0; i < max_cpus; i++) {
>              if (state->ioreq_local_port[i] == port) {
>                  break;
>              }
>          }
>  
> -        if (i == smp_cpus) {
> +        if (i == max_cpus) {
>              hw_error("Fatal error while trying to get io event!\n");
>          }
>  
> @@ -1123,10 +1123,10 @@ int xen_hvm_init(void)
>          hw_error("map buffered IO page returned error %d", errno);
>      }
>  
> -    state->ioreq_local_port = g_malloc0(smp_cpus * sizeof (evtchn_port_t));
> +    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
>  
>      /* FIXME: how about if we overflow the page here? */
> -    for (i = 0; i < smp_cpus; i++) {
> +    for (i = 0; i < max_cpus; i++) {
>          rc = xc_evtchn_bind_interdomain(state->xce_handle, xen_domid,
>                                         xen_vcpu_eport(state->shared_page, i));
>          if (rc == -1) {
> -- 
> Anthony PERARD
> 
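
For reference, with both hunks applied the allocation and bind loop in
xen_hvm_init() end up looking roughly like the sketch below. It is
reconstructed from the hunks quoted above; the error branch and the final
port assignment are not shown in the diff, so they appear here only as
placeholders / assumptions.

    /* Sketch of the relevant fragment of xen_hvm_init() after this patch,
     * reconstructed from the hunks above.  One local event-channel port is
     * bound per *possible* vcpu (max_cpus) instead of per boot-time vcpu
     * (smp_cpus), so vcpus that start unplugged still get a bound port and
     * can be hot-plugged later without getting stuck. */
    state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));

    /* FIXME: how about if we overflow the page here? */
    for (i = 0; i < max_cpus; i++) {
        rc = xc_evtchn_bind_interdomain(state->xce_handle, xen_domid,
                                        xen_vcpu_eport(state->shared_page, i));
        if (rc == -1) {
            /* error handling elided in the quoted hunk */
        }
        /* assumed from the surrounding code: remember the bound port so
         * cpu_get_ioreq() can match it against the pending event channel */
        state->ioreq_local_port[i] = rc;
    }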

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel