
Re: [Xen-devel] [Qemu-devel] Qemu disaggregation in Xen environment



On 03/06/2012 02:22 AM, Markus Armbruster wrote:
Anthony Liguori <anthony@xxxxxxxxxxxxx> writes:

My concern is that this moves the Xen use case pretty far from what
the typical QEMU use case would be (running one emulator per guest).

If it was done in a non-invasive way, maybe it would be acceptable but
at a high level, I don't see how that's possible.

I almost think you would be better off working to build a second front
end (reusing the device model, and nothing else) specifically for Xen.

Almost like qemu-io but instead of using the block layer, use the device model.

What's in it for uses other than Xen?

I figure a qemu-dev could help us move towards a more explicit interface
between devices and the rest.

qemu-io is useful for testing.  Do you think a qemu-dev could become a
useful testing tool as well?

It all depends on how it develops. I think that's the primary advantage of doing a qemu-dev here for Xen: instead of shoehorning in a use case that really doesn't fit the model of qemu-system-*, we can look at a new executable that satisfies another use case and potentially use it for other things.

The fundamental characteristic of qemu-system-* is "a guest is a single process". That's the defining characteristic to me, and all of the global soup we have deeply ingrains that into our design. Trying to turn it into "half a guest is a single process" is going to be horrific.

OTOH, I can imagine qemu-dev as simply, "this process exposes a set of devices and an RPC interface to interact with them". That will surely improve modularity, could be useful for testing, and possibly will evolve into something that is useful outside of Xen.
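For the sake of discussion, the "devices behind an RPC interface" shape above could look something like the rough sketch below. This is purely illustrative Python: the Device/DeviceServer names and the JSON request format are invented for the example, not an actual or proposed QEMU interface.

```python
import json

class Device:
    """A trivial register-file device standing in for a real device model."""
    def __init__(self, name):
        self.name = name
        self.regs = {}

    def read(self, addr):
        # Unwritten registers read as zero in this toy model.
        return self.regs.get(addr, 0)

    def write(self, addr, value):
        self.regs[addr] = value

class DeviceServer:
    """Dispatches JSON-encoded RPC requests to registered devices.

    In a real qemu-dev this would sit behind a socket; here the front end
    (Xen, a test harness, ...) just calls handle() directly.
    """
    def __init__(self):
        self.devices = {}

    def register(self, dev):
        self.devices[dev.name] = dev

    def handle(self, request):
        req = json.loads(request)
        dev = self.devices[req["device"]]
        if req["op"] == "read":
            return json.dumps({"value": dev.read(req["addr"])})
        elif req["op"] == "write":
            dev.write(req["addr"], req["value"])
            return json.dumps({"ok": True})
        return json.dumps({"error": "unknown op"})

# A front end drives the device over the RPC boundary instead of
# calling into the device model directly.
server = DeviceServer()
server.register(Device("serial0"))
server.handle(json.dumps(
    {"device": "serial0", "op": "write", "addr": 0, "value": 0x41}))
reply = json.loads(server.handle(
    json.dumps({"device": "serial0", "op": "read", "addr": 0})))
assert reply["value"] == 0x41
```

The point of the sketch is only that the process boundary sits at the device interface: the front end never touches device state except through the RPC layer, which is what makes the device model reusable for testing or for a non-QEMU front end.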

Regards,

Anthony Liguori




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

