Re: [Xen-devel] Linux Container and Tapdev
On 2012-1-10, at 0:09, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
> On Sat, Jan 07, 2012 at 12:17:23PM +0800, MaoXiaoyun wrote:
>>
>> Hi:
>>
>> Recently I have done some study on Linux containers (LXC), a lightweight
>> OS-level virtualization technology.
>>
>> With my previous knowledge of tapdisk, my assumption is that I could use
>> a VHD for each container to separate the storage of the containers; and if
>> the VHD is later kept in distributed storage, containers could even be
>> migrated.
>>
>> Build a filesystem on the tapdev and mount it under some directory, put the
>> rootfs of
>
> How would you "mount" a tapdev filesystem?

When a tapdisk starts, a tapdev device appears under /dev/tapdevX, where X is
something like a, b, c, ... . Run mkfs.ext3 on this device, mount it at
/mnt/tapdevX, and finally bind-mount it into the LXC container.

> But more interestingly, how would you migrate your tap disk? Is it on NFS or
> OCFS2 or iSCSI? If so, why not just expand the filesystem on an iSCSI disk
> and mount it on other machines and "migrate it".

We use Hadoop for storage; a VHD in HDFS is a logical file, which can't be
directly mounted.

>> container into that dir, and start the LXC. I haven't tried this; will try
>> later.
>
> I am not really sure how one would freeze and migrate the processes? Does
> cgroups have that functionality?

Close the local tapdisk, start it on the remote host, and reopen the VHD on
HDFS. LXC supports freezing the processes in a container; I'm not sure whether
live migration could be implemented in the future.

Thanks.

>>
>> I post my idea here, hoping someone has some comments.
>>
>> Thanks.
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
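The tapdev-as-container-rootfs flow described above can be sketched as a short
shell sequence. This is a sketch only, not a tested recipe: the device name
/dev/tapdeva, the mount point /mnt/tapdeva, and the container name "c1" are
illustrative assumptions, it must run as root, and it presumes tapdisk has
already attached the VHD and that an LXC container "c1" is configured under
/var/lib/lxc/c1.

```shell
#!/bin/sh
# Sketch: back an LXC container's rootfs with a tapdisk-exposed VHD.
# Assumed names (not from the original post): /dev/tapdeva, /mnt/tapdeva, c1.
set -e

TAPDEV=/dev/tapdeva              # block device created by blktap/tapdisk
MNT=/mnt/tapdeva                 # where the VHD's filesystem gets mounted
ROOTFS=/var/lib/lxc/c1/rootfs    # the container's configured rootfs path

mkfs.ext3 "$TAPDEV"              # one-time: create a filesystem on the tapdev
mkdir -p "$MNT" "$ROOTFS"
mount "$TAPDEV" "$MNT"           # mount the VHD-backed filesystem
mount --bind "$MNT" "$ROOTFS"    # bind-mount it as the container's rootfs
lxc-start -n c1 -d               # start the container on top of it
```

For the migration idea in the thread, the reverse would apply on the source
host (lxc-stop, unmount both mounts, close the tapdisk) before reopening the
VHD from HDFS on the destination.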