
Re: [Xen-API] [Xen-devel] GSoC 2012 project brainstorming



Ian Campbell wrote:
> On Wed, 2012-03-14 at 17:55 +0000, Dave Scott wrote:
> > I've been busy recently moving all the domain management parts of xapi
> > into a separate daemon, which later can be libxl'ed up. For testing
> > (and for fun) I've made it understand some of xm/xl config file
> > syntax eg
> 
> Interesting. Pure ocaml I presume?

Yep.

> > [root@st20 ~]# xn export-metadata-xm win7 win7.xm
> 
> xn? We're going to run out of letters soon ;-)

:-)

> Do you handle import as well as export? One of the more interesting use
> cases (I think) is handling folks who want to migrate from an xm/xl
> based setup to a xapi setup (e.g. by installing Kronos on their
> existing Debian system). That was the primary aim of the proposed project.

IIRC it can import simple things, but it's quite incomplete. If the
goal is to migrate from an xm/xl setup to xapi, then it probably makes
more sense to use the existing (and by definition correct) xl config
parser and then talk the XenAPI directly. Or emit a xapi "metadata export".
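
To make that concrete, here's a rough, untested sketch of the direction I
mean, using the standard XenAPI python bindings. The helper names, URL and
credentials are placeholders, the config parsing is deliberately naive and
the VM record is incomplete, so treat it as an illustration of the shape
rather than working code:

    import XenAPI

    def parse_xm_config(path):
        # xm/xl configs are basically python-style key=value assignments; a
        # real tool should reuse the xl parser rather than this naive version.
        cfg = {}
        with open(path) as f:
            exec(f.read(), {}, cfg)
        return cfg

    def import_vm(session, cfg):
        mem = str(int(cfg.get("memory", 256)) * 1024 * 1024)  # MiB -> bytes
        vcpus = str(cfg.get("vcpus", 1))
        hvm = cfg.get("builder") == "hvmloader"
        record = {
            "name_label": cfg.get("name", "imported-vm"),
            "memory_static_min": mem, "memory_dynamic_min": mem,
            "memory_dynamic_max": mem, "memory_static_max": mem,
            "VCPUs_max": vcpus, "VCPUs_at_startup": vcpus,
            "HVM_boot_policy": "BIOS order" if hvm else "",
            "HVM_boot_params": {"order": cfg.get("boot", "cd")} if hvm else {},
            # ... plus the other mandatory VM.create fields; VBDs/VIFs would
            # be created separately against the new VM ref from disk=/vif= ...
        }
        return session.xenapi.VM.create(record)

    session = XenAPI.Session("http://localhost")
    session.xenapi.login_with_password("root", "password")
    try:
        vm = import_vm(session, parse_xm_config("win7.xm"))
    finally:
        session.xenapi.session.logout()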

One of the interesting areas will be storage...

> > [root@st20 ~]# cat win7.xm
> > name='win7'
> > builder='hvmloader'
> > boot='dc'
> > vcpus=1
> > memory=2048
> > disk=[ 'sm:7af570d8-f8c5-4103-ac1d-969fe28bfc11,hda,w', 'sm:137c8a61-113c-ab46-20fa-5c0574eaff77,hdb:cdrom,r' ]
> 
> Half-assed wondering -- I wonder if sm: (or script=sm or similar)
> support could work in xl...

Yeah, I've been wondering that too. As well as tidying up the domain
handling code in xapi, I've also been trying to generate docs for the
xapi <-> storage interface (aka "SMAPI"). The current version is here:

http://dave.recoil.org/xen/storage.html

I'd also like to make the current storage plugins run standalone (currently
they require xapi to be running). If we did that then we could potentially
add support for XCP storage types directly into libxl (or the hotplug scripts).

As well as generating docs for the SMAPI, I can also generate python
skeleton code to make it easier to write storage plugins. A custom one of
these might make it easier to migrate from xm/xl to xapi too, by leaving
the disks where they are rather than moving them into a regular SR.
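
For a flavour of what I mean by a skeleton, here's a hand-written sketch
(the class and method names are invented for this example -- the real
interface is the one documented at the URL above) of a plugin that leaves
existing disk images where they are instead of copying them into a managed
SR:

    # Illustrative only: names and signatures are made up for this sketch;
    # the generated skeleton would follow the SMAPI docs linked above.
    class RawFileSR(object):
        """Expose existing disk images in place, rather than importing them."""

        def sr_attach(self, sr_uuid, path):
            # Nothing to create: the images already live under 'path'.
            self.path = path

        def vdi_attach(self, vdi_uuid):
            # Hand back the file the VM should use, unchanged.
            return {"params": "%s/%s.img" % (self.path, vdi_uuid)}

        def vdi_detach(self, vdi_uuid):
            pass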

> 
> > vif=[  ]
> > pci=[  ]
> > pci_msitranslate=1
> > pci_power_mgmt=0
> > # transient=true
> >
> > Another goal of the refactoring is to allow xapi to co-exist with
> > domains created by someone else (e.g. xl/libxl). This should allow a
> > migration to be done piecemeal, one VM at a time on the same host.
> 
> The brainstorming list below includes "make xapi use libxl". Is this (or
> a subset of this) the sort of thing which could be done by a GSoC
> student?

I think a subset could probably be done in the GSoC timeframe. Before the
summer starts I should have merged my refactoring into the xapi mainline.
A student could then fire up the ocaml libxl bindings (they might need a bit
of tweaking here or there) and then start patching things through. All the
critical libxc code is now called (indirectly) via a single module:

https://github.com/djs55/xen-api/blob/cooper/ocaml/xenops/xenops_server_xen.ml

If that were made to use libxl then the job would be basically done. All that
would be left would be little things like statistics gathering.


> 
> I suppose it is only fair that I offer to be co-/backup-mentor to a main
> mentor from the xapi side of things for such a project...

Great :-)

Cheers,
Dave

> 
> Ian.
> 
> 
> >
> > Cheers,
> > Dave
> >
> > >
> > > Ian.
> > >
> > > > TOOLS
> > > > -----
> > > > - pv grub2
> > > > - xapi support in libvirt
> > > > - make xapi use libxl
> > > > - compile xapi on ARM
> > > > - OpenXenManager
> > > > - driver domains
> > > > - PV dbus
> > > > - HA daemon for Remus
> > > > - HA daemon for XCP
> > > >
> > > > PERF
> > > > ----
> > > > - Oprofile
> > > > - Linux perf tools in guest
> > > >
> > > > HYPERVISOR
> > > > ----------
> > > > - insmod Xen
> > > > - event channel limits
> > > > - NUMA
> > > >
> > > > MEMORY
> > > > ------
> > > > - disk based memory sharing
> > > > - memory scanner
> > > > - paging replacing PoD
> > > > - VM Fork
> > > > - Copy on read (past VDI boot)
> > > >
> > > >
> > > > IO
> > > > --
> > > > - PV OpenGL/Gallium
> > > > - PV USB
> > > > - PV USB3
> > > > - PV SCSI
> > > > - PVFB in Xorg
> > > > - Infiniband
> > > >
> > > >
> > > > STORAGE
> > > > -------
> > > > - gluster/ceph plugins
> > > >
> > > >
> > > > QEMU
> > > > ----
> > > > - BSD libc
> > > > - upstream QEMU stubdoms
> > > >
> > > >
> > > > TESTS
> > > > -----
> > > > - better web reporting
> > > > - more tests
> > > > - upstream linux
> > > >
> > > >
> > > > DISTROS
> > > > -------
> > > > - packaging stubdoms
> > > > - xen on centos6
> > > > - driver domains
> > > > - figure out the VM format issue
> > > > - XSM in distros

