
Re: [Xen-devel] bug in hvmloader xenbus_shutdown logic



On 05/12/13 02:08, Liuqiming (John) wrote:
According to the oxenstored source code, oxenstored will read every domain's IO
ring no matter which event happened.

Here is the main loop of oxenstored:

let main_loop () =
...
        process_domains store cons domains   <- no matter which event came in,
                                                this handles the IO ring requests
                                                of all domains
        in

So when one domain's hvmloader is clearing its IO ring, oxenstored may access
that ring because another domain's event happened.

Yes, this version of the code checks all the interdomain rings every time it goes around the main loop. FWIW my experimental updated oxenstored uses one (user-level) thread per connection. I had thought this was just an efficiency issue... I didn't realise it had correctness implications.
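
For reference, the shared page that gets scanned on each pass looks roughly like this (paraphrased from memory of the public io/xs_wire.h header, so treat the exact layout and names as an approximation rather than a quote):

#include <stdint.h>

/* Approximation of the xenstore shared ring page -- not a verbatim copy
 * of io/xs_wire.h. */
#define XENSTORE_RING_SIZE 1024
typedef uint32_t XENSTORE_RING_IDX;

struct xenstore_domain_interface {
    char req[XENSTORE_RING_SIZE];         /* requests to the xenstore daemon */
    char rsp[XENSTORE_RING_SIZE];         /* replies and async watch events  */
    XENSTORE_RING_IDX req_cons, req_prod; /* each index written by one end,  */
    XENSTORE_RING_IDX rsp_cons, rsp_prod; /* read by the other               */
};

/*
 * Because the daemon re-reads these indices for every domain on every pass
 * of its main loop, a guest that memset()s the whole page can be observed
 * half-cleared: for instance req_cons already zeroed while req_prod still
 * holds its old value, which makes it look as if there were unread requests
 * whose bytes have already been wiped.
 */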

On Wed, 2013-12-04 at 04:25 +0000, Liuqiming (John) wrote:

memset cannot set the whole page to zero atomically, and during the
clearing process oxenstored may access this ring.

Why is oxenstored poking at the ring? Surely it should only do so when
the guest (hvmloader) sends it a request. If hvmloader is clearing the
page while there is a request/event outstanding then this is an
hvmloader bug.

Ok, that makes sense.
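
For concreteness, a sketch of that rule on the hvmloader side might look like the following. The helpers consume_pending_responses(), wait_for_xenstored_event() and notify_xenstored() are made-up names standing in for whatever hvmloader really uses, the struct is the same approximation as above, and real code would also need the appropriate memory barriers:

#include <stdint.h>
#include <string.h>

#define XENSTORE_RING_SIZE 1024
typedef uint32_t XENSTORE_RING_IDX;

struct xenstore_domain_interface {
    char req[XENSTORE_RING_SIZE];
    char rsp[XENSTORE_RING_SIZE];
    XENSTORE_RING_IDX req_cons, req_prod;
    XENSTORE_RING_IDX rsp_cons, rsp_prod;
};

/* Placeholder stubs -- real hvmloader would parse the replies and block on
 * the xenstore event channel instead. */
static void consume_pending_responses(struct xenstore_domain_interface *intf)
{
    intf->rsp_cons = intf->rsp_prod;   /* discard anything sent to us */
}
static void wait_for_xenstored_event(void) { /* block on the event channel */ }
static void notify_xenstored(void)         { /* kick the event channel */ }

/*
 * Sketch: only wipe the page once both directions are quiescent, i.e. we
 * have consumed every response and xenstored has consumed every request we
 * sent.  Clearing before this point is the hvmloader bug described above.
 */
static void xenbus_shutdown_sketch(struct xenstore_domain_interface *intf)
{
    consume_pending_responses(intf);
    while (intf->req_cons != intf->req_prod ||
           intf->rsp_cons != intf->rsp_prod) {
        wait_for_xenstored_event();
        consume_pending_responses(intf);
    }
    /* None of our requests is outstanding any more, so nothing should be
     * looking at this ring on our behalf when we clear it. */
    memset(intf, 0, sizeof(*intf));
    notify_xenstored();
}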

My only hesitation is that violations of this rule (as with the current oxenstored) will only show up rarely. Presumably the xenstore ring desynchronises and the guest will never be able to boot properly with PV drivers. I don't think I've ever seen this myself.

Do frontends expect the xenstore ring to be in a zeroed state when they start up? Assuming hvmloader has received all the outstanding requests/events, would it be safe to leave the ring contents alone?
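
At least at the ring-protocol level I would expect stale contents not to matter: if a frontend follows the usual discipline, it only ever looks at the bytes between its consumer index and the producer index (masked to the ring size), so anything outside that window is never interpreted, zeroed or not. A toy reader, just to make the windowing explicit (not lifted from any frontend's code):

#include <stddef.h>
#include <stdint.h>

#define XENSTORE_RING_SIZE 1024
#define MASK_XENSTORE_IDX(idx) ((idx) & (XENSTORE_RING_SIZE - 1))
typedef uint32_t XENSTORE_RING_IDX;

/*
 * Toy consumer: copies out only the bytes in [*cons, prod), masked to the
 * ring size.  Stale data beyond prod is never read, which is why a drained
 * but not zeroed ring should look the same as a fresh one to a reader that
 * follows this discipline.
 */
static size_t ring_read(const char ring[XENSTORE_RING_SIZE],
                        XENSTORE_RING_IDX *cons, XENSTORE_RING_IDX prod,
                        char *dst, size_t len)
{
    size_t copied = 0;

    while (copied < len && *cons != prod) {
        dst[copied++] = ring[MASK_XENSTORE_IDX(*cons)];
        (*cons)++;
    }
    return copied;
}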

Cheers,
Dave



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

