
Re: [Xen-devel] [Qemu-devel] Qemu disaggregation in Xen environment

Anthony Liguori <anthony@xxxxxxxxxxxxx> writes:

> On 03/05/2012 04:53 PM, Stefano Stabellini wrote:
>> On Mon, 5 Mar 2012, Anthony Liguori wrote:
>>> On 02/28/2012 05:46 AM, Julien Grall wrote:
>>>> Hello,
>>>> In the current model, only one instance of qemu is running for each
>>>> running HVM domain. We are looking at disaggregating qemu to have, for
>>>> example, an instance to emulate only network controllers, another to
>>>> emulate block devices, etc...
>>> Why would you want to do this?
>> We are trying to disaggregate QEMU, the same way we do with Linux.
>> On Xen we can run a Linux guest to drive the network card, another Linux
>> guest to drive the SATA controller, another one for the management
>> stack, etc. This helps both with scalability and isolation.
>> In this scenario it is only natural that we run a QEMU that only emulates
>> a SATA controller in the storage domain, a QEMU that only emulates the
>> network card in the network domain and everything else in a stubdom.
>> What's better than using QEMU as an emulator? Using three QEMUs per guest
>> as emulators! :-)
> My concern is that this moves the Xen use case pretty far from what
> the typical QEMU use case would be (running one emulator per guest).
> If it was done in a non-invasive way, maybe it would be acceptable but
> at a high level, I don't see how that's possible.
> I almost think you would be better off working to build a second front
> end (reusing the device model, and nothing else) specifically for Xen.
> Almost like qemu-io, but instead of using the block layer, use the device
> model.

What's in it for uses other than Xen?

I figure a qemu-dev could help us move towards a more explicit interface
between devices and the rest of QEMU.

qemu-io is useful for testing.  Do you think a qemu-dev could become a
useful testing tool as well?

Xen-devel mailing list
