
Re: [Xen-devel] Re: Still struggling with HVM: tx timeouts on emulated nics



On 30.09.2011 18:06, Stefan Bader wrote:
> On 30.09.2011 16:09, Stefano Stabellini wrote:
>> @@ -270,6 +271,18 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>>          if ( !is_hvm_domain(v->domain) ||
>>               domain_pirq_to_irq(v->domain, eoi.irq) > 0 )
>>              pirq_guest_eoi(pirq);
>> +        if ( is_hvm_domain(v->domain) &&
>> +                domain_pirq_to_emuirq(v->domain, eoi.irq) > 0 )
>> +        {
>> +            struct hvm_irq *hvm_irq = &v->domain->arch.hvm_domain.irq;
>> +            int gsi = domain_pirq_to_emuirq(v->domain, eoi.irq);
>> +
>> +            /* if this is a level irq and count > 0, send another
>> +             * notification */ 
>> +            if ( gsi >= NR_ISAIRQS /* ISA irqs are edge triggered */
>> +                    && hvm_irq->gsi_assert_count[gsi] )
>> +                send_guest_pirq(v->domain, pirq);
>> +        }
>>          spin_unlock(&v->domain->event_lock);
>>          ret = 0;
>>          break;
> 
> This hunk looks substantially different from my 4.1.1-based code. There is no
> spin_lock acquired there. Not sure whether that could also be a reason for the
> different behaviour. I'll add that spinlock as well.
> 
>     case PHYSDEVOP_eoi: {
>         struct physdev_eoi eoi;
>         ret = -EFAULT;
>         if ( copy_from_guest(&eoi, arg, 1) != 0 )
>             break;
>         ret = -EINVAL;
>         if ( eoi.irq >= v->domain->nr_pirqs )
>             break;
>         if ( v->domain->arch.pirq_eoi_map )
>             evtchn_unmask(v->domain->pirq_to_evtchn[eoi.irq]);
>         if ( !is_hvm_domain(v->domain) ||
>              domain_pirq_to_irq(v->domain, eoi.irq) > 0 )
>             ret = pirq_guest_eoi(v->domain, eoi.irq);
>         else
>             ret = 0;
>         break;
>     }
> 

Ok, so I had been modifying that hunk to

        spin_lock(&v->domain->event_lock);
        if ( v->domain->arch.pirq_eoi_map )
            evtchn_unmask(v->domain->pirq_to_evtchn[eoi.irq]);
        if ( !is_hvm_domain(v->domain) ||
             domain_pirq_to_irq(v->domain, eoi.irq) > 0 )
            pirq_guest_eoi(v->domain, eoi.irq);
        if ( is_hvm_domain(v->domain) &&
                domain_pirq_to_emuirq(v->domain, eoi.irq) > 0 )
        {
            struct hvm_irq *hvm_irq = &v->domain->arch.hvm_domain.irq;
            int gsi = domain_pirq_to_emuirq(v->domain, eoi.irq);

            /* if this is a level irq and count > 0, send another
             * notification */
            if ( gsi >= NR_ISAIRQS /* ISA irqs are edge triggered */
                    && hvm_irq->gsi_assert_count[gsi] ) {
                printk("re-send event for gsi%i\n", gsi);
                send_guest_pirq(v->domain, eoi.irq);
            }
        }
        spin_unlock(&v->domain->event_lock);
        ret = 0;

Also, I did not completely remove the section that would return the status
without setting needsEOI. I just changed the if condition to be < 0 instead of
<= 0 (I knew from the tests that the mapping was always 0, and the < 0 check
might still be useful for something):

        irq_status_query.flags = 0;
        if ( is_hvm_domain(v->domain) &&
             domain_pirq_to_irq(v->domain, irq) < 0 )
        {
            ret = copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;
            break;
        }
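
For reference, the part right after that early return is what actually sets
needsEOI; roughly like this (paraphrased from the 4.1 sources rather than
copied from my tree, so the exact wording may differ):

        /* Non-emulated pirqs: tell the guest it has to issue the EOI
         * hypercall, and whether the interrupt line is shared. */
        irq_status_query.flags |= XENIRQSTAT_needs_eoi;
        if ( pirq_shared(v->domain, irq) )
            irq_status_query.flags |= XENIRQSTAT_shared;
        ret = copy_to_guest(arg, &irq_status_query, 1) ? -EFAULT : 0;
        break;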

With that, a quick test shows both the re-sends happening occasionally and the
domU doing EOIs, and no stall is apparent. I did the same quick test with the
emulated e1000 NIC and that still seems ok. Those were not very thorough tests,
but I would have observed a stall pretty quickly otherwise.

-Stefan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
