
Re: [Xen-devel] Xen crashing when killing a domain with no VCPUs allocated



Hi,

On 07/23/2014 04:31 PM, Jan Beulich wrote:
>>>> On 21.07.14 at 14:57, <dario.faggioli@xxxxxxxxxx> wrote:
>> On lun, 2014-07-21 at 12:46 +0100, Julien Grall wrote:
>>> On 07/21/2014 11:33 AM, George Dunlap wrote:
>>>> On 07/18/2014 09:26 PM, Julien Grall wrote:
>>
>>>>> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
>>>>> index e9eb0bc..c44d047 100644
>>>>> --- a/xen/common/schedule.c
>>>>> +++ b/xen/common/schedule.c
>>>>> @@ -311,7 +311,7 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
>>>>>      }
>>>>>
>>>>>      /* Do we have vcpus already? If not, no need to update node-affinity */
>>>>> -    if ( d->vcpu )
>>>>> +    if ( d->vcpu && d->vcpu[0] != NULL )
>>>>>           domain_update_node_affinity(d);
>>>>
>>
>>>> Overall it seems like those checks for the existence of vcpus should be
>>>> moved into domain_update_node_affinity().  The ASSERT() there I think is
>>>> just a sanity check to make sure we're not getting a ridiculous result
>>>> out of our calculation; but of course if there actually are no vcpus,
>>>> it's not ridiculous at all.
>>>>
>>>> One solution might be to change the ASSERT to
>>>> ASSERT(!cpumask_empty(dom_cpumask) || !d->vcpu || !d->vcpu[0]).  Then we
>>>> could probably even remove the d->vcpu conditional when calling it.
>>>
>>> This solution also works for me. Which change do you prefer?
>>>
>> FWIW, I think I prefer changing the ASSERT() in
>> domain_update_node_affinity(), as George suggested (perhaps with the
>> reordering Andrew suggested).
> 
> +1

Thanks. I will send a patch in the next couple of days to fix this issue.
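
For reference, the direction I have in mind is roughly the sketch below. It is
untested and heavily abbreviated (the allocation and per-vcpu accumulation in
domain_update_node_affinity() is elided, and the local/helper names are from
memory), so the actual patch may look a bit different:

    void domain_update_node_affinity(struct domain *d)
    {
        cpumask_var_t dom_cpumask;

        if ( !zalloc_cpumask_var(&dom_cpumask) )
            return;

        /* ... accumulate the affinity of every existing vcpu ... */

        /*
         * An empty mask is only a bug if the domain actually has vcpus.
         * On the destruction path of a failed domain creation, d->vcpu
         * (or d->vcpu[0]) can still be NULL, and an empty result is
         * then perfectly legitimate.
         */
        ASSERT(!cpumask_empty(dom_cpumask) || !d->vcpu || !d->vcpu[0]);

        /* ... update d->node_affinity and free dom_cpumask ... */
    }

With the ASSERT() relaxed like this, the "if ( d->vcpu && d->vcpu[0] != NULL )"
guard in sched_move_domain() can go away and domain_update_node_affinity() can
be called unconditionally, as George suggested.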

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel