
Re: [Xen-devel] [PATCH] xen/arm: Fix smp_send_call_function_mask() for current CPU

Hi Julien,

On 14 August 2014 18:14, Julien Grall <julien.grall@xxxxxxxxxx> wrote:
> Hi Anup,
> On 08/14/2014 12:47 PM, Anup Patel wrote:
>> The smp_send_call_function_mask() does not work on Foundation v8
>> model with one CPU. The reason being gicv2_send_SGI() is called
>> with irqmode==SGI_TARGET_LIST and *cpu_mask=0x1 on CPU0 which
>> does not work on Foundation v8 model.
> Please provide any steps, trace that make you think that irqmode ==
> SGI_TARGET_LIST and *cpu_mask=0x1 is not working on Foundation V8 Model.
>> Further, it is really strange that smp_send_call_function_mask()
>> depends on GIC SGIs for calling function on current CPU.
> Why it's strange??? The GIC specification doesn't seem to add any
> restriction about sending an SGI to the current CPU.
> It clearly looks like a bug in another part of Xen. And I doubt it's
> because the Foundation Model is not able to support the use case above.
> Without any further explanation than "It doesn't work" and "It's
> strange", I don't think this patch should be accepted in Xen.
> You need at least to point the paragraph in the spec...

It's strange because, to call a function on the current CPU, the
implementation will:
Trigger an SGI for the current CPU => Xen takes the interrupt on the
current CPU => the IPI interrupt handler calls smp_call_function_interrupt()

For the current CPU, the above can be simplified by straight away
calling smp_call_function_interrupt().

If the current implementation does not sound strange and
sub-optimal to you, then I have no further comments.

>> This patch fixes smp_send_call_function_mask() for current CPU
>> by directly calling smp_call_function_interrupt() on current CPU.
>> This is very similar to what Xen x86 does.
> What was done in x86 may not make sense on ARM....

Oh, really. Please see the explanation above.

In fact, Linux and other hypervisors directly call the SMP function
on the current CPU. I don't see how Xen ARM is different.

I would suggest you provide proper explanation before
making statements like: "What was done in x86 may not
make sense on ARM....."

> For me the current code is valid... So far, I didn't see any issue on
> different boards. I've also used recently the Foundation v8 Model [1],
> without any issue.

What is valid need not be optimal, so I don't buy this argument.

I agree that Xen ARM64 not working for me on the Foundation v8
model could potentially be a bug in the Foundation v8 model, but
that is no reason to reject this patch, because it is just a small
improvement which also happens to fix my problem.

If you still don't agree with the above, then feel free to reject this patch.


>> Signed-off-by: Anup Patel <anup.patel@xxxxxxxxxx>
>> Signed-off-by: Pranavkumar Sawargaonkar <pranavkumar@xxxxxxxxxx>
>> ---
>>  xen/arch/arm/smp.c |    7 +++++++
>>  1 file changed, 7 insertions(+)
>> diff --git a/xen/arch/arm/smp.c b/xen/arch/arm/smp.c
>> index 30203b8..4fde3f9 100644
>> --- a/xen/arch/arm/smp.c
>> +++ b/xen/arch/arm/smp.c
>> @@ -20,6 +20,13 @@ void smp_send_event_check_mask(const cpumask_t *mask)
>>  void smp_send_call_function_mask(const cpumask_t *mask)
>>  {
>>      send_SGI_mask(mask, GIC_SGI_CALL_FUNCTION);
>> +
>> +    if ( cpumask_test_cpu(smp_processor_id(), mask) )
>> +    {
>> +        local_irq_disable();
>> +        smp_call_function_interrupt();
>> +        local_irq_enable();
>> +    }
>>  }
> Send SGI mask doesn't remove the current CPU, therefore this will result
> to send twice the SGIs to the current physical CPU.
> While we have a check in smp_call_function_interrupt to avoid spurious
> SGI interrupt. It's not acceptable to rely on it knowingly.
> In general, if we can avoid test for a specific CPU in the code it's better.
> Regards,
> [1]
> http://www.arm.com/products/tools/models/fast-models/foundation-model.php
> --
> Julien Grall
