Re: [Xen-devel] [PATCH v2 00/20] VM forking
On Tue, Dec 31, 2019 at 3:40 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>
> On Mon, Dec 30, 2019 at 05:37:38PM -0700, Tamas K Lengyel wrote:
> > On Mon, Dec 30, 2019 at 5:20 PM Julien Grall <julien.grall@xxxxxxxxx> wrote:
> > >
> > > Hi,
> > >
> > > On Mon, 30 Dec 2019, 20:49 Tamas K Lengyel, <tamas@xxxxxxxxxxxxx> wrote:
> > > >
> > > > On Mon, Dec 30, 2019 at 11:43 AM Julien Grall <julien@xxxxxxx> wrote:
> > > > But keep in mind that the "fork-vm" command even with this update
> > > > would still not produce a "fully functional" VM for you on its
> > > > own. The user still has to produce a new VM config file, create
> > > > the new disk, save the QEMU state, etc.
>
> IMO the default behavior of the fork command should be to leave the
> original VM paused, so that you can continue using the same disk and
> network config in the fork and you won't need to pass a new config
> file.
>
> As Julien already said, maybe I wasn't clear in my previous replies:
> I'm not asking you to implement all this; it's fine if the
> implementation of the fork-vm xl command requires you to pass certain
> options, and if the default behavior is not implemented.
>
> We need an interface that's sane, and that's designed to be easy and
> comprehensive to use, not an interface built around what's currently
> implemented.

OK, so I think that would look like "xl fork-vm <parent_domid>" with
additional options for things like name, disk, and vlan, or a
completely new config, all of which are currently not implemented,
plus an additional option to not launch QEMU at all, which would be
the only one currently working. Also keeping the separate
"xl fork-launch-dm" as is. Is that what we are talking about?
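
For concreteness, here is a rough sketch of how that full interface
could eventually look. The option names below are placeholders made
up for illustration (they are not what the current series implements),
and apart from the "don't launch QEMU" path none of this exists yet:

  # Default: pause the parent and reuse its config, with xl picking
  # a name for the fork.
  xl fork-vm <parent_domid>

  # Override individual pieces of the parent's config.
  xl fork-vm --name <name> --disk <disk_spec> --vlan <bridge> <parent_domid>

  # Or supply a completely new config file for the fork.
  xl fork-vm --config <fork_config> <parent_domid>

  # The only path that works today: create the fork without launching
  # QEMU, then attach a device model later with the separate
  # fork-launch-dm command (the argument order here is a guess).
  xl fork-vm --no-dm <parent_domid>
  xl fork-launch-dm <fork_config> <parent_domid> <fork_domid>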

> > > If you fork then the configuration should be very similar. Right?
> > > So why does the user require to provide a new config rather than
> > > have the command update the existing one? To me, it feels like
> > > this is a call to make mistakes when forking.
> > >
> > > How is the new config different from the original VM?
> >
> > The config must be different at least by giving the fork a
> > different name. That's the minimum, and it's enough only if the VM
> > you are forking has no disk at all.
>
> Adding an option to pass an explicit name for the fork would be
> handy, or else xl could come up with a name by itself, like it's
> done for migration, ie: <original name>--fork<digit>.
>
> > If it has a disk, you also have to update the config to point to
> > where the new disk is. I'm using LVM snapshots, but you could also
> > use qcow2, or whatever else there is for disk CoW. The fork can
> > also have different options enabled than its parent. For example,
> > in our test case the forks have altp2m enabled while the parent VM
> > doesn't. There could be other options like that someone might want
> > to enable for the fork(s). If there is networking involved, you
> > likely also have to attach the fork to a new VLAN so as to avoid a
> > MAC-address collision on the bridge. So there is quite a lot of
> > variation possible, hence it's better to have the user generate the
> > new config they want instead of xl coming up with something on its
> > own.
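
For concreteness, those manual steps look roughly like the following
with LVM. All names and paths here are made up, the fork config shows
only the fields that differ from the parent (everything else should
match the parent's config), and using xl's altp2m option to enable
altp2m for the fork is an assumption:

  # Create a copy-on-write disk for the fork by snapshotting the
  # parent's logical volume (a qcow2 backing-file chain works too).
  lvcreate --snapshot --size 10G --name debian-fork1 /dev/vg0/debian

  # debian-fork1.cfg -- a new name, the snapshot as the fork's disk,
  # a fresh MAC on a separate bridge, and altp2m enabled only here:
  name   = "debian-fork1"
  disk   = [ 'format=raw, vdev=xvda, access=rw, target=/dev/vg0/debian-fork1' ]
  vif    = [ 'mac=00:16:3e:aa:bb:01, bridge=xenbr-forks' ]
  altp2m = "external"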

> Passing a new config file for the fork is indeed fine, but maybe we
> don't want this to be the default behavior; as said above, I think
> it's possible to fork a VM without passing a new config file.
>
> > > As a side note, I can't see any patch adding documentation.
> >
> > It's only an experimental feature, so adding documentation was not
> > a priority. The documentation is pretty much in the cover letter.
> > I'm happy to add its content as a file under docs in a patch (with
> > the above extra information).
>
> Please also document the new xl command(s) in the man page [0].

Ack.

Thanks,
Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel