
Re: [Xen-devel] [PATCH v7 for-xen-4.5 2/2] dpci: Replace tasklet with a softirq (v7)



>>> On 29.09.14 at 12:49, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 27/09/14 02:33, Konrad Rzeszutek Wilk wrote:
>> @@ -128,6 +176,25 @@ int pt_irq_create_bind(
>>      }
>>      pirq_dpci = pirq_dpci(info);
>>      /*
>> +     * A crude 'while' loop with us dropping the spinlock and giving
>> +     * the softirq_dpci a chance to run.
>> +     * We MUST check for this condition as the softirq could be scheduled
>> +     * and hasn't run yet. We do this up to one second at which point we
>> +     * give up. Note that this code replaced tasklet_kill which would have
>> +     * spun forever and would do the same thing (wait to flush out
>> +     * outstanding hvm_dirq_assist calls) - said tasklet_kill had been
>> +     * in 'pt_pirq_cleanup_check' (called during pt_irq_destroy_bind)
>> +     * and 'pci_clean_dpci_irq' (domain destroyed).
>> +     */
>> +    if ( pt_pirq_softirq_active(pirq_dpci) )
>> +    {
>> +        spin_unlock(&d->event_lock);
>> +        process_pending_softirqs();
>> +        if ( ( NOW() - start ) > SECONDS(1) )
>> +            return -EAGAIN;
>> +        goto restart;
>> +    }
> 
> 1s seems like overkill here, and far too long to loop in Xen context
> even if we are processing softirqs.
> 
> Why is it so long?  Surely the softirq will be processed after the first
> call to process_pending_softirqs(), or are we possibly waiting on
> another pcpu to run its softirqs?

We had discussed this before, so I'm surprised this 1s value
survived: iirc we agreed to use an unbounded loop, referring to
tasklet_kill() as the precedent for why doing so is valid (i.e. not
dangerous).
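
For illustration, a minimal sketch of the unbounded variant being
suggested, assuming the same surrounding context as the hunk above
(d->event_lock held, a 'restart' label just before it is taken, and
the pt_pirq_softirq_active() / process_pending_softirqs() helpers
this series introduces):

    if ( pt_pirq_softirq_active(pirq_dpci) )
    {
        /*
         * Drop the lock so the softirq handler can take it, drain
         * this CPU's pending softirqs, and retry.  Like tasklet_kill()
         * this may spin for as long as hvm_dirq_assist is outstanding,
         * i.e. until the softirq has actually run.
         */
        spin_unlock(&d->event_lock);
        process_pending_softirqs();
        goto restart;
    }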

>> +static int __init setup_dpci_softirq(void)
>> +{
>> +    unsigned int cpu;
>> +
>> +    for_each_online_cpu(cpu)
>> +        INIT_LIST_HEAD(&per_cpu(dpci_list, cpu));
> 
> Hmm - it's loops like this that make me think we should have a function
> to explicitly initialise the percpu area of cpus at the point at which
> they come up.  Still, this is fine for now, and another item for the
> todo list.

An alternative would be to make the respective initcall a pre-SMP
one (unless it depends on other things that don't get done early
enough).
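
A sketch of that alternative, assuming the patch's dpci_softirq
handler and HVM_DPCI_SOFTIRQ number; note that with a pre-SMP initcall
only the boot CPU is online at this point, so CPUs brought up later
would still need their list head initialised, e.g. from a
CPU_UP_PREPARE notifier (not shown):

    static int __init setup_dpci_softirq(void)
    {
        unsigned int cpu;

        /* Pre-SMP, this covers just the boot CPU. */
        for_each_online_cpu(cpu)
            INIT_LIST_HEAD(&per_cpu(dpci_list, cpu));
        open_softirq(HVM_DPCI_SOFTIRQ, dpci_softirq);
        return 0;
    }
    presmp_initcall(setup_dpci_softirq);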

>> -static void pci_clean_dpci_irqs(struct domain *d)
>> +static int pci_clean_dpci_irqs(struct domain *d)
>>  {
>>      struct hvm_irq_dpci *hvm_irq_dpci = NULL;
>>  
>>      if ( !iommu_enabled )
>> -        return;
>> +        return -ESRCH;
> 
> I realise that our return codes are inconsistent, but can we avoid
> further overloading of return values?  -ESRCH generally means "no such
> domain".
> 
> Certain IOMMU-related failures currently use -ENODEV, which is perhaps
> more appropriate.

+1
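
i.e. the hunk above would become (a trivial sketch of the suggested
change):

    if ( !iommu_enabled )
        return -ENODEV;  /* "no such device": no IOMMU, nothing to clean */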

>> --- a/xen/include/xen/hvm/irq.h
>> +++ b/xen/include/xen/hvm/irq.h
>> @@ -99,7 +99,7 @@ struct hvm_pirq_dpci {
>>      struct domain *dom;
>>      struct hvm_gmsi_info gmsi;
>>      struct timer timer;
>> -    struct tasklet tasklet;
>> +    struct list_head softirq_list;
>>  };
>>  
>>  void pt_pirq_init(struct domain *, struct hvm_pirq_dpci *);
>> @@ -109,6 +109,7 @@ int pt_pirq_iterate(struct domain *d,
>>                                struct hvm_pirq_dpci *, void *arg),
>>                      void *arg);
>>  
>> +int pt_pirq_softirq_active(struct hvm_pirq_dpci *);
> 
> "struct hvm_pirq_dpci *pirq_dpci" please.

I don't think this is something to insist on; in fact I'm probably the
one who introduced most of the parameter-name-less prototypes
currently in the tree. I agree names are useful
for parameters of sufficiently generic types, or when there are
multiple parameters of the same type. But when the type name
itself is specific enough, additionally giving it a name is simply
redundant.
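
For illustration, both styles from the header touched above (the
pt_pirq_iterate() declaration reconstructed from the hunk; the
callback parameter name "cb" is an assumption):

    /* Type already says it all - a parameter name adds nothing: */
    int pt_pirq_softirq_active(struct hvm_pirq_dpci *);

    /* Generic and repeated types - the names carry information: */
    int pt_pirq_iterate(struct domain *d,
                        int (*cb)(struct domain *,
                                  struct hvm_pirq_dpci *, void *arg),
                        void *arg);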

Jan

