
Re: [Xen-devel] Issue policing writes from Xen to PV domain memory

>Aravindh, I thought of a solution to your problem. It seems the ring collapses
>under recursive identical faults. If they are all the same, you can merge them
>up. It's relatively simple and robust logic to compare against the last
>unconsumed element in the ring:
>1. read last unconsumed
>2. compare
>3. different? write into ring
>4. same? atomic_read consumer index to ensure still unconsumed
>5. consumed? write into ring <still a possible race below, but not a tragedy>
>6. not consumed? drop
>Kind of like what remote display protocols do quite aggressively. More
>aggressive batching and reordering is hard with a consumer as decoupled as in
>the Xen shared rings case. I'm pretty hopeful this will solve this one problem.

This sounds very promising. I will give it a shot. Thanks so much for the idea.
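For reference, here is a minimal sketch of the merge logic quoted above, using a hypothetical simplified event ring (the struct and function names are mine for illustration; the real Xen mem_event shared rings use the io/ring.h macros and differ in layout):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical simplified event ring; real Xen mem_event rings differ. */
#define RING_SIZE 8

typedef struct {
    unsigned long gfn;    /* faulting frame */
    unsigned int  flags;  /* access type */
} mem_event_t;

typedef struct {
    mem_event_t ring[RING_SIZE];
    unsigned int prod;    /* producer index, written by Xen */
    unsigned int cons;    /* consumer index, written by the consumer */
} event_ring_t;

/* Place 'ev' in the ring, dropping it when it duplicates the last
 * unconsumed entry (steps 1-6 in the quoted scheme).  Returns true if
 * the event was written or safely merged, false if the ring was full. */
static bool ring_put_merged(event_ring_t *r, const mem_event_t *ev)
{
    unsigned int cons = r->cons;  /* step 4: snapshot consumer index */

    if (r->prod != cons) {
        const mem_event_t *last = &r->ring[(r->prod - 1) % RING_SIZE];

        /* steps 1-2: compare against the last unconsumed element */
        if (last->gfn == ev->gfn && last->flags == ev->flags &&
            r->cons == cons)
            return true;          /* step 6: still unconsumed -> drop */
    }

    if (r->prod - r->cons == RING_SIZE)
        return false;             /* ring full: the hard overflow case */

    /* steps 3/5: different, or the duplicate was already consumed */
    r->ring[r->prod % RING_SIZE] = *ev;
    r->prod++;
    return true;
}
```

As noted in the quote, the consumer-index re-check narrows but does not fully close the race: the consumer may advance between the check and the drop, in which case one duplicate event is lost, which is benign for identical faults.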

>But am not sure how to solve the very general problem of aggressive
>overflow in a scheduling context that does not allow exiting to schedule() and
>thus wait().

Yes, this is the larger problem. I am hoping Jan or Tim will have an idea about
what to do, as we are entering areas of Xen that I am not very familiar with.


Xen-devel mailing list


