
Re: [Xen-users] rump kernels running on the Xen hypervisor



On Fri, 2013-08-16 at 02:58 +0300, Antti Kantee wrote:
> Hi all,
> 
> I have written initial support for running rump kernels directly on top 
> of the Xen hypervisor.  Rump kernels essentially consist of unmodified 
> kernel drivers, but without the added baggage associated with full 
> operating systems such as a scheduling policy, VM, multiprocess support, 
> and so forth.  In essence, the work enables running minimal single-image 
> application domains on top of Xen while relying on real-world proven 
> kernel-quality drivers, including file systems, TCP/IP, SoftRAID, disk 
> encryption, etc.  Rump kernels provide a subset of the POSIX'y 
> application interfaces relevant for e.g. file systems and networking, so 
> adapting existing applications is possible.
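
[For readers unfamiliar with rump kernels: the "POSIX'y application
interfaces" mentioned above are exposed through the rump client API
(rump_init() plus rump_sys_* syscall wrappers from the NetBSD rump
libraries).  The following is an untested sketch of what a minimal
*hosted* rump client looks like, not code from the Xen port; it assumes
the NetBSD rump headers and libraries (librump, librumpvfs) are
installed, and the RUMP_O_* flag names come from <rump/rumpdefs.h>:

```c
/* Untested sketch of a hosted rump kernel client.  Assumes the NetBSD
 * rump libraries are available; link with something like
 * -lrumpvfs -lrump. */
#include <rump/rump.h>
#include <rump/rump_syscalls.h>
#include <rump/rumpdefs.h>
#include <stdio.h>

int
main(void)
{
        /* bootstrap a rump kernel inside this process */
        rump_init();

        /* the rump kernel has its own file system namespace */
        int fd = rump_sys_open("/hello.txt",
            RUMP_O_RDWR | RUMP_O_CREAT, 0644);
        if (fd == -1) {
                fprintf(stderr, "rump_sys_open failed\n");
                return 1;
        }
        rump_sys_write(fd, "hi from a rump kernel\n", 22);
        rump_sys_close(fd);
        return 0;
}
```

Adapting an existing application is then largely a matter of routing its
system calls to the rump_sys_* equivalents.]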

Sounds really cool! What sort of applications have you tried this with?
Does this provide enough of a POSIX-like system (libc, etc.) to run
"normal" applications, or do applications need to be written explicitly
for rump use?

What I'm wondering is if this would be a more maintainable way to
implement the qemu stub domains or the xenstore one?

> I have pushed the implementation to github.  If you wish to try the few 
> demos I put together, follow the instructions in the README.  Please 
> report any and all bugs via the github interface.  Testing so far was 
> light, but given that I wrote less than 1k lines of code including 
> comments and whitespace, I hope I haven't managed to cram too many bugs 
> in there.

I think the usual statistic is that 1000 lines should only have a few
dozen bugs ;-D

>  I've done my testing on an x86_32 Dom0 with Xen 4.2.2.
> 
>          https://github.com/anttikantee/rumpuser-xen/
> 
> I'll explain the implementation in a bit more detail.  Rump kernels are 
> made possible by the anykernel architecture of NetBSD.  Rump kernels run on 
> top of the rump kernel hypercall layer, so the port was a matter of 
> implementing those hypercalls for the Xen hypervisor.  I started looking 
> at the Xen Mini-OS to figure out how to 
> bootstrap a domU, and quickly realized that Mini-OS implements almost 
> everything the rump kernel hypercall layer requires: a build infra, 
> cooperative thread scheduling, physical memory allocation, simple 
> interfaces to I/O devices such as block/net, and so forth.  As a result, 
> the implementation is more or less plugged on top of Mini-OS, and 
> contains a lot of code unnecessary for rump kernels.  I'm unsure if I 
> should fully fork Mini-OS or attempt to merge some of my changes back. 
> For example, it seems like the host namespace leaks into Mini-OS (i.e. 
> -nostdinc isn't used), and it would be nice to get that fixed.  If 
> anyone has any smart ideas about which direction to go in, please advise.
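
[To make the "plugged on top of Mini-OS" point concrete, here is a rough
illustration of how a couple of rump kernel hypercalls could be backed
by Mini-OS primitives.  The rumpuser_* names follow the public
rumpuser(3) hypercall interface, but the bodies below are illustrative
assumptions, not code from the rumpuser-xen repository, and the exact
Mini-OS header names and signatures may differ between versions:

```c
/* Illustrative sketch only: two rump kernel hypercalls backed by
 * Mini-OS facilities.  Not the actual rumpuser-xen code. */
#include <mini-os/os.h>
#include <mini-os/xmalloc.h>
#include <mini-os/time.h>
#include <errno.h>

/* memory allocation hypercall: Mini-OS already provides an
 * alignment-aware allocator */
int
rumpuser_malloc(size_t howmuch, int alignment, void **memp)
{

        *memp = _xmalloc(howmuch, alignment);
        return (*memp == NULL) ? ENOMEM : 0;
}

/* monotonic time hypercall: Mini-OS's monotonic_clock() returns
 * nanoseconds since boot */
int
rumpuser_clock_gettime(int which, int64_t *sec, long *nsec)
{
        uint64_t now = monotonic_clock();

        *sec = now / 1000000000ULL;
        *nsec = now % 1000000000ULL;
        return 0;
}
```

Since Mini-OS also provides cooperative threads and block/net frontends,
most of the remaining hypercalls reduce to similarly thin shims.]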

Ian Jackson has also been looking at these sorts of issues with mini-os
recently; I'll let him comment on where he has gotten to.

I think as a general rule we'd be happy with any cleanups made to
mini-os.

Ian.


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

