Re: [Xen-devel] RFC: Configuring rumprun-xen application stacks from Xenstore
> > Following is the high-level description from the Git commit:
>
> Can you also post an example of the usage of your CLI tool? Actually,
> can you post a rough description of the entire process that a user would
> have to follow, i.e. compile, configure, run.

Running "xr" with no parameters gives a nice command reference :-) Anyhow:

Simple hello world, using tests/hello/hello built as part of buildxen.sh:

  # xr run -i tests/hello/hello

Running a webserver: get the mathopd source from http://mathopd.org/. I
used mathopd as it's BSD licensed, non-forking and small. To build and
run it:

1. git clone, buildxen.sh
2. Put app-tools at the *END* of your $PATH. There is a bug/interaction
   with the app-tools ld that breaks things if you put it before the
   real ld.
3. (in mathopd/src) rumpapp-xen-make
4. You need filesystem images for a stub /etc and /data. I am using
   cd9660 for these as you can portably generate them anywhere you have
   genisoimage (ex mkisofs). (see below)
5. Assuming you have those, run the following in the mathopd src
   directory, as root:

   # xr run -i -n inet:dhcp -b etc.iso:/etc -b data.iso:/data mathopd -nt -f /etc/mathopd.conf

This will configure xenif0 using DHCP/IPv4, mount /etc and /data using
the respective images and then run "mathopd -nt -f /etc/mathopd.conf".
The mathopd options tell it not to fork into the background and to write
logs to standard output.

That's all :-)
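For reference, generating the two images can be as simple as the
following sketch. The directory names and contents are only an
illustration (a mathopd.conf that ends up as /etc/mathopd.conf in the
guest, and a static page for mathopd to serve out of /data):

  mkdir -p etc data
  cp mathopd.conf etc/                         # becomes /etc/mathopd.conf
  echo 'hello from rumprun' > data/index.html  # static content to serve
  genisoimage -quiet -r -o etc.iso etc         # -r adds Rock Ridge attributes
  genisoimage -quiet -r -o data.iso data

cd9660 is read-only, which is fine here since mathopd is told to write
its logs to standard output anyway.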
Next step for me is to add a copy of mathopd as a demo into rumprun-xen.
As part of that demo I will add scripts to rebuild the filesystem images,
but probably not do that in the default build so as not to introduce a
hard dependency on genisoimage.

I then plan to experiment with getting a full LAMP stack or similar
running across multiple VMs, but I need to stabilise the simple case with
mathopd serving a static website first; there are still several bugs
lurking there.

> > - The rumpconfig module provides _rumprun_config() and
> >   _rumprun_deconfig() functions. These are called before and after the
> >   application main() function, and also in the case of deconfig when
> >   _exit() is called.
>
> Is deconfig necessary? The rump kernel already automatically e.g.
> unmounts file systems and releases the dhcp lease when it's halted.

It does unmount filesystems (if halted correctly), but afaict it does not
do rump_pub_etfs_remove(), and the dhcp stuff does not destroy the
interface. This is nitpicking, but if you don't do that then the
underlying blkfront/netfront does not get "correctly" detached either, as
you can see from the "port X still bound!" messages during
minios_stop_kernel().

> > Secondly, it is my intention with this work to provide a
> > "docker-alike" interface for running rumprun applications. The "xr"
> > script is therefore the CLI for running such applications.
>
> The user-facing configuration tool was sorely needed. I hate to go into
> naming, but ... can we call the tool "rumprun"? I think your tool will
> be the basis for running rumprun stacks beyond Xen, and we should try to
> avoid the user-visible syntax having any obvious shortcomings.

Guess so. In my mind there is potentially more the tool can do than just
run rumprun stacks, for example:

- manage interaction with the host networking, map host ports to
  domain:port (see the sketch below)
- generate or otherwise manage filesystem images (eg. we could have a
  custom DNS server)
- manage stack naming on the host; this is a bit daft at the moment, eg.
  if you try to run two copies of mathopd it will fall over due to the
  Xen domain name not being unique

And so on. Maybe this can be layered into separate tools, with the
'rumprun' script dealing only with launching. Needs more thought.
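To give an idea of the host networking point: the port mapping could boil
down to something like this on the dom0 side (purely a sketch; the guest
address and the ports are made up, and the tool would first have to
discover the address the guest obtained via DHCP):

  # forward connections to host port 8080 to port 80 in the rumprun domain
  iptables -t nat -A PREROUTING -p tcp --dport 8080 \
      -j DNAT --to-destination 10.11.12.13:80
  iptables -A FORWARD -p tcp -d 10.11.12.13 --dport 80 -j ACCEPT

Setting that up, and tearing it down again when the domain is destroyed,
is the sort of thing I have in mind under "manage interaction with the
host networking".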
> > Note that in this initial version, only configuring IPv4 network
> > interfaces with DHCP is supported, and only using image files with ffs
> > or cd9660 filesystems for block devices is supported.
>
> Would e.g. IPv6 support take longer than it took to write that
> paragraph ?-)

<taptaptap> ;)

Martin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel