
Re: [Xen-devel] [RFC 7/7] libxl: Wait for QEMU startup in stubdomain

On Mon, Feb 9, 2015 at 7:08 AM, Anthony PERARD
<anthony.perard@xxxxxxxxxx> wrote:
> On Mon, Feb 09, 2015 at 09:07:13AM +0000, Ian Campbell wrote:
>> On Fri, 2015-02-06 at 17:23 +0000, Stefano Stabellini wrote:
>> > >
>> > > One of the main issues outstanding from when Anthony originally posted
>> > > his patches is how we want to go about building this?  I honestly do
>> > > not know how well the current dracut-based approach to building the
>> > > root image will work across various Linux distributions; perhaps it
>> > > will be OK, since all of the libraries that dracut will siphon in will
>> > > have to be in place to meet the build requirements for QEMU to begin
>> > > with.
>> > >
>> > > However, I have zero knowledge about ARM-based Xen or setups where
>> > > NetBSD is used for dom0, and how they might affect the decision-making.
>> > > I also do not know what lessons have been learned from building other
>> > > stubdoms, rumpkernel, or Mirage that might inform these decisions.
>> > > In other words, what do you see as a sensible build scheme?  The
>> > > approach taken in the patches strikes me as too hacky for release
>> > > quality, but maybe it is OK...
>> >
>> > It looks OK to me but I am not an expert in this kind of things. I'll
>> > let Ian Campbell and Ian Jackson (CC'ed) comment on it.
>> I'm not at all keen on things like the use of dracut (or mkinitramfs or
>> similar) in Xen's build since they are inherently/inevitably specific to
>> the Linux distro they came from, so they often don't work (or aren't
>> even available on) other Linux distros and are an even bigger problem
>> for *BSD.

OK.  Maybe pulling *BSD into this is too much at this time, because
the process of building Linux, QEMU (for Linux), and all of the
dependent libraries blows up into a much larger issue once we are no
longer on Linux.  Someone once suggested using Yocto; I don't know if
that works under *BSD.  Instead, we might have to use a Linux-based VM
to build a Linux-based stubdom - but then, what do we use for the
Linux-based VM?

I propose we target the following for tech preview: if you are
building Xen on Linux x86_64, stubdom-linux (i.e., its Linux kernel
and the rootfs containing QEMU and the libraries it requires) _can_ be
automatically built as part of the Xen build (but perhaps not by
default).  Under these conditions, we should have everything we need
at hand - if you are on Linux and can compile QEMU-upstream for dom0,
I see no reason why you cannot also assemble a Linux-based
QEMU-upstream stubdom.

> I'd like to clarify that the use of dracut here is only to copy a binary
> and its dependencies (shared libraries); the binary used is called
> dracut-installer. I guess that the same can be achieved by the copy_exec()
> function from mkinitramfs (found in
> /usr/share/initramfs-tools/hook-functions on a Debian system).
> The rest is done by a script, which chooses which binaries and files to include
> in the rootfs, where to put them, and how to generate the image.

To make this discussion a little more concrete, I have put a listing
of the contents of the rootfs that is currently being built at the end
of this email. Maybe it will help to see there really is not all that
much in there.  The difficult part of putting together the rootfs, in
terms of an automatic build, is mostly determining which libraries go
into /lib64.
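For anyone who wants to see what that library-resolution step amounts to, here is a rough sketch of the kind of thing dracut (or copy_exec) does for us.  This is not the actual dracut code path - copy_deps is just an illustrative helper name, and the qemu path is whatever your build installs - but it shows the core idea: walk ldd's output and copy each resolved library (plus the ELF interpreter) into the rootfs.

```shell
# copy_deps BIN DESTDIR: copy BIN's shared-library dependencies into DESTDIR.
# A rough sketch of what dracut / mkinitramfs's copy_exec() handles for us;
# it does not cover dlopen'ed libraries, which the real tools know about.
copy_deps() {
    bin=$1; dest=$2
    mkdir -p "$dest"
    # ldd prints "libz.so.1 => /lib64/libz.so.1 (0x...)" for each needed
    # library, and a bare path for the ELF interpreter (ld-linux-x86-64.so.2).
    ldd "$bin" | awk '/=>/ { print $3 } $1 ~ /^\// { print $1 }' |
    while read -r lib; do
        # skip "not found" entries and the vdso, copy the rest dereferenced
        [ -f "$lib" ] && cp -L "$lib" "$dest/"
    done
}

# e.g.: copy_deps /usr/lib/xen/bin/qemu-system-i386 rootfs/lib64
```

Run against the QEMU binary, this would produce essentially the /lib64 listing at the end of this mail (modulo the .so symlinks, which still need to be created separately).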

Anthony's solution to this was to use dracut, and it does this very
well.  Not using dracut (or possibly some equivalent function of
mkinitramfs) would just mean we would end up replicating its
functionality (or at least the subset of its functionality we rely
on).
I see a couple of relatively simple paths to pursue:
(1) Do what is being done now: download dracut (from the Xen git repo
or elsewhere), compile it, and use it to copy over the libraries we
need for QEMU.  Is there some reason that dracut does not work across
Linux distros?  We could skip downloading and compiling on a system
that already has dracut installed.
(2) Have a configure script dependency requiring _either_ dracut or
mkinitramfs.  We use whichever one is available.
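If we went with option (2), the configure-time check could be as simple as probing PATH for whichever tool is present.  A minimal sketch - first_available is just an illustrative helper, not anything in Xen's actual configure:

```shell
# first_available CMD...: print the first command found on PATH, fail if none.
first_available() {
    for t in "$@"; do
        if command -v "$t" >/dev/null 2>&1; then
            echo "$t"
            return 0
        fi
    done
    return 1
}

# In configure, roughly:
#   INITRD_TOOL=$(first_available dracut mkinitramfs) ||
#       { echo "error: need dracut or mkinitramfs for the stubdom rootfs" >&2; exit 1; }
```

The rest of the build would then dispatch on $INITRD_TOOL, using dracut's installer on Fedora-like systems and copy_exec() from the initramfs-tools hook functions on Debian-like ones.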

>> I've yet to see a solution for this which seemed satisfactory enough to
>> be included in Xen's system. Mirage, minios stubdoms, rumpkernels etc
>> all avoid this by not having a need for a rootfs.

OK, so these may not provide us with helpful prototypes.  If the
stubdom is based on Linux, we _have_ to have a rootfs, whether served
up via blkback (current solution) or rolled into an initramfs and
tucked into the kernel (which was tried in the past, but increased the
memory overhead for the stubdom).  So, how do we build the rootfs?

>> But, it's not necessarily the case that the Xen project has to produce
>> the Linux stubdom binary as part of its build output. IMHO it would be
>> sufficient (for tech preview at least) if the tools would detect and use
>> a stubdom binary if it were present at some well defined path, so long as
>> the runes to build the image were documented somewhere (e.g. in
>> the wiki, in docs/misc, in some script, etc). Then the problems of them
>> being distro/kernel specific are somewhat mitigated (e.g. a Fedora fan
>> can write dracut instructions, a Debian fan can write mkinitramfs ones,
>> and a BSD fan can make it work with a BSD kernel, etc) and we avoid
>> having to have rootfs construction code in Xen's build.

This is just pushing the problem down the line.  I don't see why we
cannot come up with something that is portable across Linux distros.
Right now, at least on Linux, building the Linux-based stubdom and
rootfs is pretty clean and quick - especially if you compare it to all
of the work required to roll up ioemu-stubdom.gz.

- Eric

Current contents of rootfs for Linux-based stubdom:

/bin/mount -> busybox
/lib64/ld-linux-x86-64.so.2 -> ld-2.19.so
/lib64/libaio.so.1 -> libaio.so.1.0.1
/lib64/libbz2.so.1 -> libbz2.so.1.0.6
/lib64/libcrypto.so -> libcrypto.so.1.0.0
/lib64/libcurl.so -> libcurl.so.4.3.0
/lib64/libcurl.so.4 -> libcurl.so.4.3.0
/lib64/libdl.so.2 -> libdl-2.19.so
/lib64/libgcc_s.so -> libgcc_s.so.1
/lib64/libglib-2.0.so -> libglib-2.0.so.0.4000.0
/lib64/libglib-2.0.so.0 -> libglib-2.0.so.0.4000.0
/lib64/libgthread-2.0.so -> libgthread-2.0.so.0.4000.0
/lib64/libgthread-2.0.so.0 -> libgthread-2.0.so.0.4000.0
/lib64/liblber-2.4.so.2 -> liblber-2.4.so.2.10.1
/lib64/libldap-2.4.so.2 -> libldap-2.4.so.2.10.1
/lib64/liblzma.so.5 -> liblzma.so.5.0.5
/lib64/liblzo2.so -> liblzo2.so.2.0.0
/lib64/liblzo2.so.2 -> liblzo2.so.2.0.0
/lib64/libm.so.6 -> libm-2.19.so
/lib64/libpixman-1.so -> libpixman-1.so.0.32.4
/lib64/libpixman-1.so.0 -> libpixman-1.so.0.32.4
/lib64/libpthread.so.0 -> libpthread-2.19.so
/lib64/libresolv.so.2 -> libresolv-2.19.so
/lib64/librtmp.so -> librtmp.so.1
/lib64/librt.so.1 -> librt-2.19.so
/lib64/libssl.so -> libssl.so.1.0.0
/lib64/libstdc++.so -> libstdc++.so.6.0.17
/lib64/libstdc++.so.6 -> libstdc++.so.6.0.17
/lib64/libutil.so.1 -> libutil-2.19.so
/lib64/libuuid.so.1 -> libuuid.so.1.3.0
/lib64/libxenctrl.so -> libxenctrl.so.4.5.0
/lib64/libxenctrl.so.4.5 -> libxenctrl.so.4.5.0
/lib64/libxenguest.so -> libxenguest.so.4.5.0
/lib64/libxenguest.so.4.5 -> libxenguest.so.4.5.0
/lib64/libxenstore.so -> libxenstore.so.3.0.3
/lib64/libxenstore.so.3.0 -> libxenstore.so.3.0.3
/lib64/libz.so.1 -> libz.so.1.2.8
/usr/lib -> /lib
