
Re: [Xen-API] rump kernels running on the Xen hypervisor


  • To: xen-api@xxxxxxxxxxxxx
  • From: George Shuklin <george.shuklin@xxxxxxxxx>
  • Date: Tue, 20 Aug 2013 05:00:33 +0400
  • Delivery-date: Tue, 20 Aug 2013 01:00:48 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>

On 16.08.2013 03:58, Antti Kantee wrote:
Hi all,

I have written initial support for running rump kernels directly on top of the Xen hypervisor. Rump kernels essentially consist of unmodified kernel drivers, but without the added baggage associated with full operating systems such as a scheduling policy, VM, multiprocess support, and so forth. In essence, the work enables running minimal single-image application domains on top of Xen while relying on real-world-proven, kernel-quality drivers, including file systems, TCP/IP, SoftRAID, disk encryption, etc. Rump kernels provide a subset of the POSIX'y application interfaces relevant for e.g. file systems and networking, so adapting existing applications is possible.
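To make the POSIX'y-interface point concrete, here is a sketch of the kind of plain POSIX C program such an application domain could host. It touches only the file system and socket calls mentioned above; the file name and peer address are made up for illustration and are not taken from the project.

/*
 * Sketch of a plain POSIX-style program of the sort that could be
 * adapted to run inside a rump kernel application domain.  Only the
 * file system and socket interfaces are exercised; the path and the
 * address below are illustrative, not from the rumpuser-xen demos.
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int
main(void)
{
	/* file system interface: create a file and write a greeting */
	int fd = open("/tmp/hello.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
	if (fd == -1) {
		perror("open");
		return 1;
	}
	const char msg[] = "hello from a rump-kernel-style app\n";
	(void)write(fd, msg, sizeof(msg) - 1);
	close(fd);

	/* networking interface: open a TCP socket and try to connect */
	int s = socket(AF_INET, SOCK_STREAM, 0);
	if (s == -1) {
		perror("socket");
		return 1;
	}
	struct sockaddr_in sin;
	memset(&sin, 0, sizeof(sin));
	sin.sin_family = AF_INET;
	sin.sin_port = htons(80);
	sin.sin_addr.s_addr = inet_addr("192.0.2.1");	/* example address */
	if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
		perror("connect");
	close(s);
	return 0;
}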

I have pushed the implementation to github. If you wish to try the few demos I put together, follow the instructions in the README. Please report any and all bugs via the github interface. Testing so far was light, but given that I wrote less than 1k lines of code including comments and whitespace, I hope I haven't managed to cram too many bugs in there. I've done my testing on an x86_32 Dom0 with Xen 4.2.2.

        https://github.com/anttikantee/rumpuser-xen/

I'll explain the implementation in a bit more detail. Rump kernels are made possible by the anykernel architecture of NetBSD. Rump kernels run on top of the rump kernel hypercall layer, so the work was a matter of implementing those hypercalls for the Xen hypervisor. I started looking at the Xen Mini-OS to figure out how to bootstrap a domU, and quickly realized that Mini-OS implements almost everything the rump kernel hypercall layer requires: a build infra, cooperative thread scheduling, physical memory allocation, simple interfaces to I/O devices such as block/net, and so forth. As a result, the implementation is more or less plugged on top of Mini-OS, and contains a lot of code unnecessary for rump kernels. I'm unsure if I should fully fork Mini-OS or attempt to merge some of my changes back. For example, it seems like the host namespace leaks into Mini-OS (i.e. -nostdinc isn't used), and it would be nice to get that fixed. If anyone has any smart ideas about which direction to go in, please advise.
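As a rough sketch of what plugging the rump kernel hypercall layer on top of Mini-OS could look like, consider the memory-allocation hypercall. The rumpuser_malloc()/rumpuser_free() prototypes below follow one reading of the rumpuser hypercall interface, and posix_memalign()/malloc() merely stand in for whatever allocator Mini-OS actually exposes, so treat this as an illustration rather than the glue code in the repository.

/*
 * Sketch only: one way a rump kernel memory hypercall could be backed
 * by a host allocator.  The prototypes follow one reading of the
 * rumpuser hypercall interface, and the allocator calls stand in for
 * Mini-OS primitives; this is not the actual rumpuser-xen code.
 */
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>

int
rumpuser_malloc(size_t howmuch, int alignment, void **memp)
{
	void *mem;

	if (alignment <= 1) {
		/* no special alignment requested: plain malloc suffices */
		mem = malloc(howmuch);
		if (mem == NULL)
			return ENOMEM;
	} else {
		/* posix_memalign stands in for an aligned host allocator */
		int error = posix_memalign(&mem, (size_t)alignment, howmuch);
		if (error != 0)
			return error;	/* report errno-style values upward */
	}

	*memp = mem;
	return 0;
}

void
rumpuser_free(void *ptr, size_t size)
{
	(void)size;	/* size hint not needed by this stand-in allocator */
	free(ptr);
}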

I thank the people who have suggested this project over the years. I believe the first one to suggest it on a public list was Jean-Yves Migeon quite some years ago, so explicit thanks go to him and "you know who you are" thanks go to others. I also thank Juan RP of Void Linux, who re-added support to Void for Xen 4.2 about 5 minutes after I told him I'd like to do some Xen testing with an x86_32 Dom0 (moving to 64bit systems is on my TODO list ;)


The more work I see around "splitting dom0 into many domains", the more it looks to me like an unexpected outcome of the Linus vs. Tanenbaum debate. Linus built a monolithic kernel, while Xen is increasingly becoming a thin, slim microkernel that manages memory, device access and process execution. Everything else (including the actual drivers, filesystems, and rich kernel services like NFS/iSCSI) runs as separate services interacting through the isolating microkernel.

I also think something should happen around xenstore: it is now a critical part of the whole stack, yet it is still a dom0 application that is easily hit by all the 'rich kernel' problems - OOM, CPU steal, I/O issues, and so on.

_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api


 

