
Re: [Xen-devel] Issue policing writes from Xen to PV domain memory



>>> On 05.05.14 at 21:27, <aravindp@xxxxxxxxx> wrote:
> It looks like the nested attempts to wait() happen only when the ring is
> full. The flow is:
> 
> mem_event_claim_slot() ->
>     mem_event_wait_slot() ->
>         wait_event(med->wq, mem_event_wait_try_grab(med, &rc) != -EBUSY)
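> 
> For reference, mem_event_wait_slot() and its helper look roughly like
> this (a simplified sketch of what xen/common/mem_event.c does; the
> actual ring accounting inside mem_event_grab_slot() is elided):
> 
> /* Try to grab a ring slot; *rc is set to -EBUSY when the ring is full. */
> static int mem_event_wait_try_grab(struct mem_event_domain *med, int *rc)
> {
>     *rc = mem_event_grab_slot(med, 0);
>     return *rc;
> }
> 
> static int mem_event_wait_slot(struct mem_event_domain *med)
> {
>     int rc = -EBUSY;
>     /* Sleeps on med->wq until a slot can be grabbed. */
>     wait_event(med->wq, mem_event_wait_try_grab(med, &rc) != -EBUSY);
>     return rc;
> }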
> 
> The wait_event() macro, with the condition substituted, expands to:
> do {                                                        \
>     if ( mem_event_wait_try_grab(med, &rc) != -EBUSY )      \
>         break;                                              \
>     for ( ; ; ) {                                           \
>         prepare_to_wait(&med->wq);                          \
>         if ( mem_event_wait_try_grab(med, &rc) != -EBUSY )  \
>             break;                                          \
>         wait();                                             \
>     }                                                       \
>     finish_wait(&med->wq);                                  \
> } while (0)
> 
> In the case where the ring is full, wait() gets called and the vcpu gets
> scheduled away. But since it is in the middle of a page fault, when it
> runs again it ends up back in handle_exception_saved and the same page
> fault is retried. And since finish_wait() never ends up being called,
> wqv->esp never becomes 0, so the assert fires on the next go-around. Am
> I on the right track?
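> 
> For reference, the waitqueue side of this looks roughly like the
> following (a simplified sketch of xen/common/wait.c; the asm that
> copies the hypervisor stack, and all error handling, are omitted):
> 
> static void __prepare_to_wait(struct waitqueue_vcpu *wqv)
> {
>     /* A non-zero esp means a previous wait() never reached
>      * __finish_wait() -- this is the assert that fires here. */
>     ASSERT(wqv->esp == 0);
>     /* ... save the stack and record the resume point in wqv->esp ... */
> }
> 
> static void __finish_wait(struct waitqueue_vcpu *wqv)
> {
>     wqv->esp = 0;    /* the only place esp is cleared */
> }
> 
> So a vcpu that is rescheduled and re-executes the faulting instruction
> re-enters __prepare_to_wait() with wqv->esp still non-zero.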

Looks like so, with the caveat that it's then unclear to me why the
ring would be full in the first place - shouldn't it get drained of earlier
requests quite quickly? But anyway, even if this didn't occur
immediately, but only rarely after many hours of running, it would
still need taking care of.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel