Re: [Xen-devel] [PATCH v8 for-xen-4.5 2/2] dpci: Replace tasklet with an softirq (v8)
>>> On 28.10.14 at 21:07, <konrad.wilk@xxxxxxxxxx> wrote:
> On Tue, Oct 28, 2014 at 10:43:32AM +0000, Jan Beulich wrote:
>> >>> On 27.10.14 at 22:13, <konrad.wilk@xxxxxxxxxx> wrote:
>> > +    /*
>> > +     * A crude 'while' loop with us dropping the spinlock and giving
>> > +     * the softirq_dpci a chance to run.
>> > +     * We MUST check for this condition as the softirq could be scheduled
>> > +     * and hasn't run yet. Note that this code replaced tasklet_kill which
>> > +     * would have spun forever and would do the same thing (wait to flush
>> > +     * out outstanding hvm_dirq_assist calls).
>> > +     */
>> > +    if ( pt_pirq_softirq_active(pirq_dpci) )
>> > +    {
>> > +        spin_unlock(&d->event_lock);
>> > +        if ( pirq_dpci->cpu >= 0 && pirq_dpci->cpu != smp_processor_id() )
>> > +        {
>> > +            /*
>> > +             * The 'raise_softirq_for' sets the CPU and raises the softirq
>> > +             * bit so we do not need to set the target CPU's
>> > +             * HVM_DPCI_SOFTIRQ.
>> > +             */
>> > +            smp_send_event_check_cpu(pirq_dpci->cpu);
>> > +            pirq_dpci->cpu = -1;
>> > +        }
>> > +        cpu_relax();
>> > +        goto restart;
>> > +    }
>>
>> As said in an earlier reply to Andrew, I think this open coding goes
>> too far. And with the softirq known to have got sent, I also don't
>> really see why it needs to be resent _at all_ (and the comments
>> don't explain this either).
>
> In the other emails you and Andrew said:
>
> ">> > Can it ever be the case that we are waiting for a remote pcpu to
> >> > run its softirq handler? If so, the time spent looping here could
> >> > be up to 1 scheduling timeslice in the worst case, and 30ms is a
> >> > very long time to wait.
> >>
> >> Good point - I think this can be the case. But there seems to be a
> >> simple counter measure: The first time we get to this point, send an
> >> event check IPI to the CPU in question (or in the worst case
> >> broadcast one if the CPU can't be determined in a race free way).
> "
And I was wrong to agree with Andrew. We can't be waiting for a full
time slice - that would be contrary to the concept of softirqs. Locally
raised softirqs are executed before returning to guest context;
remotely raised ones issue an IPI.
> Which is true. That is what this is trying to address.
>
> But if we use 'cpu_raise_softirq', which you advocate, it would inhibit
> the IPI if the HVM_DPCI_SOFTIRQ bit is already set on the remote CPU:
>
> void cpu_raise_softirq(unsigned int cpu, unsigned int nr)
> {
>     unsigned int this_cpu = smp_processor_id();
>
>     if ( test_and_set_bit(nr, &softirq_pending(cpu))  <=== that will be true
>          || (cpu == this_cpu)
>          || arch_skip_send_event_check(cpu) )
>         return;
>
>     if ( !per_cpu(batching, this_cpu) || in_irq() )
>         smp_send_event_check_cpu(cpu);
>     else
>         set_bit(nr, &per_cpu(batch_mask, this_cpu));
> }
>
> In which case we still won't be sending the IPI.
And we don't need to, as it was already done by whoever set that flag.
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel