
Re: [Xen-devel] [RFC V8 2/3] libxl domain snapshot API design




>>> On 12/8/2014 at 07:12 PM, in message
<20141208111214.GC17128@xxxxxxxxxxxxxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>
wrote: 
> On Mon, Dec 08, 2014 at 12:34:47AM -0700, Chun Yan Liu wrote: 
> >  
> >  
> > >>> On 12/6/2014 at 12:06 AM, in message 
> > <20141205160615.GA24938@xxxxxxxxxxxxxxxxxxxxx>, Wei Liu 
> > <wei.liu2@xxxxxxxxxx> 
> > wrote:  
> > > I have to admit I'm confused by the back and forth discussion. It's hard
> > > to justify the design of a new API without knowing what the constraints
> > > and requirements are from your PoV.
> > >   
> > > Here are my two cents, not about details, but about general constraints.  
> > >   
> > > There are two layers: the users of libxl (clients -- xl, libvirt etc.)
> > > and libxl (the library itself).
> > >   
> > > 1. it's better to *not* have storage management in libxl.  
> > >   
> > > It's likely that clients already have their own management
> > > functionality. I'm told that libvirt has that, as does XAPI. Having
> > > this functionality in libxl is a bit redundant and requires lots of
> > > work (enlightening libxl on what a disk looks like and calling out
> > > to various utilities).
> >  
> > Thanks Wei and Ian for your reply. We did have much discussion around
> > can/cannot (e.g. can xl finish the disk snapshot?) and should/should-not
> > (e.g. should the disk snapshot process be in xl? should
> > domain_snapshot_delete be in libxl?), and it was confusing because we
> > had different ideas. So, settling it down is helpful.
> >  
> > Talking about libvirt: it does provide storage management, but through
> > storage pools and volumes. Usually we don't use a storage pool/vol but
> > use backend files directly, and then the libvirt storage driver cannot
> > manage them. Even for a libvirt vol, the functionality in the storage
> > driver is limited; at least 'snapshot' cannot be done. So the libvirt
> > side also needs to add code to handle disk snapshots, much like xl does.
> >  
>  
> OK, so I take it that libvirt can be completely out of the picture? I
> mean, it's not a requirement for you to integrate with libvirt?
>  
> I was thinking of a stack like this when I replied:
>  
>   libvirt: manages snapshot (including storage snapshot) 
>   libxl 
>   [other lower-level stuff]
>  
> But from your reply I read that libvirt doesn't have that functionality
> (or only a very limited version), so you would like to do things like:
>  
>   libvirt (or your homebrew toolstack) 
>   xl      (xl or libxl manages domain snapshots) 
>   libxl 
>   [other lower-level stuff]
>  
> That's why you spent loads of time discussing with Ian what should be 
> done where, right? 

Partly. At least for domain disk snapshot create/delete, I prefer using
QMP commands instead of calling qemu-img on each disk one by one. To use
QMP commands, libvirt will need libxl's help. Of course, if libxl doesn't
supply that, libvirt can call qemu-img on each disk one by one; that is
not preferred, but it can be done.
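
For illustration only, the one-by-one fallback could look roughly like
this in C; the function name is mine, and the QMP route would instead
send commands such as blockdev-snapshot-sync to the running QEMU:

#include <stdio.h>
#include <stdlib.h>

/* Sketch only: shell out to qemu-img for each disk in turn. */
static int snapshot_disks_with_qemu_img(const char *snap_name,
                                        char *const disk_paths[],
                                        int num_disks)
{
    char cmd[1024];
    int i;

    for (i = 0; i < num_disks; i++) {
        /* qemu-img snapshot -c creates an internal qcow2 snapshot;
         * there is no atomicity across disks, unlike a QMP
         * 'transaction'. */
        snprintf(cmd, sizeof(cmd), "qemu-img snapshot -c '%s' '%s'",
                 snap_name, disk_paths[i]);
        if (system(cmd) != 0)
            return -1;
    }
    return 0;
}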

>  
> > Following the constraint that it's better NOT to supply disk snapshot
> > functions in libxl, we let xl and libvirt each do that by themselves;
> > that's OK.
> >  
> > Then I think NO new API needs to be exported from libxl, since:
> > * for saving/restoring memory, there are already APIs.
>  
> The principle is that if the existing API doesn't work well enough for
> you, we will consider adding a new one.
>  
> We probably need a new API. If you want to do a live snapshot, we would 
> need to notify xl that we are in the middle of pausing and resuming a 
> domain.  

This is where we discussed a lot. Do we really need a
libxl_domain_snapshot_create API, or can xl do the work itself?

Even for a live snapshot: after calling libxl_domain_suspend with the
LIVE flag, memory is saved and the domain is left paused. xl can then
call the disk snapshot functions to finish the disk snapshots and, after
all of that, call libxl_domain_unpause to unpause the domain. So I don't
think xl has any trouble doing that. In case there is some
misunderstanding, please point it out.
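
Roughly, the sequence I have in mind is the following sketch;
do_disk_snapshots() is just a placeholder for the xl-side disk work
discussed here, not an existing function:

#include <libxl.h>

/* Placeholder for the xl/libxlu disk snapshot work discussed here. */
int do_disk_snapshots(uint32_t domid);

int take_live_snapshot(libxl_ctx *ctx, uint32_t domid, int save_fd)
{
    int rc;

    /* Save the memory live; the domain is left paused on return. */
    rc = libxl_domain_suspend(ctx, domid, save_fd, LIBXL_SUSPEND_LIVE,
                              NULL);
    if (rc) return rc;

    /* Take the disk snapshots while the domain is paused. */
    rc = do_disk_snapshots(domid);
    if (rc) return rc;

    /* Resume the domain. */
    return libxl_domain_unpause(ctx, domid);
}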

>  
> > * disk snapshot work is xl-internal and can be put in xl (or xlu).
> > * handling the JSON files is xl-internal and can be put in xl.
>  
> Yes. 
>  
> > (these are the main pieces of work VM snapshotting involves).
> >  
> > Right? 
> >  
> > This is quite different from the previous document, so better to confirm.
> >  
> > >   
> > > 2. it's *not* a requirement for xl to have the capability to manage
> > > snapshots.
> > >
> > > It's the same argument that xl has no idea how to manage snapshots
> > > created by "xl save". This should ease your concern about having to
> > > duplicate code for libvirt and xl. IMHO xl only needs the capability
> > > to create a snapshot and to create a domain from a snapshot.
> >  
> > This way it's much easier, since we don't need to maintain the snapshot
> > info in a file and don't need to take care of the snapshot chain. But I
> > doubt whether that's good:
> > 1. From the user's side, it's a very common request to list all snapshots.
> > 2. For KVM, virsh supplies snapshot-create/delete/list/revert.
> >    Is it good for xl to supply only snapshot-create/revert? After all,
> >    it's more complicated for the user to take care of the memory save
> >    file and disk snapshot info than with 'xl save' (where the user only
> >    needs to take care of the memory state file).
> >  
>  
> However, the current architecture for libvirt to use libxl is like
>  
>   libvirt 
>   libxl 
>   [other lower-level stuff]
>  
> So implementing snapshot management in xl cannot work for you either. 
> It's not part of the current architecture.

You are right. I understand you are trying to suggest a way to ease the
job. I just want to make sure this way is really better and finally
acceptable. :-) Just IMO, I think xl snapshot-list is wanted, and that
means managing snapshots in xl is needed.

>  
> Not that I'm against the idea of managing domain snapshots in xl; I'm
> trying to reduce the workload here.
>  
> > > The  
> > > downside is that now xl and libvirt are disconnected, but I think it's  
> > > fine. 
> >  
> > Two things here:
> > 1. To connect xl and libvirt, we would need to manage snapshot info in
> >    libxl (or libxlu). That's not preferred per the initial design, and
> >    it is not the point we're discussing here.
> > 2. For xl only, listing and deleting snapshots also requires managing
> >    snapshot info (in xl).
> >  
> > Considering managing snapshot info in xl, the only question is about
> > the IDL and gentypes.py. The expected structure is as follows and is
> > expected to be saved into a JSON file, but it contains both
> > xl-namespace and libxl-namespace things, so gentypes.py will have a
> > problem. Better ideas?
> >  
> > typedef struct xl_domain_snapshot {
> >     char *name;
> >     char *description;
> >     uint64_t creation_time;
> >     char *memory_path;           /* saved memory state file */
> >     int num_disks;
> >     libxl_disk_snapshot *disks;  /* crosses into the libxl namespace */
> >     char *parent;                /* parent in the snapshot chain */
> >     bool current;                /* is this the current snapshot? */
> > } xl_domain_snapshot;
> >  
>  
> The libxl_disk_snapshot suggests that you still want storage management 
> inside libxl, which should probably be in libxlu? 

Yeah, I may put it in libxlu. The question is the same: it crosses two
namespaces. But now I think we can work around it: in xl_domain_snapshot,
define an xl_disk_snapshot, then convert it into a libxlu_disk_snapshot
before calling the libxlu API.
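
A rough sketch of what I mean; every type and function name here is
hypothetical, just to show the conversion across the namespace boundary:

/* Hypothetical xl-side description of one disk snapshot. */
typedef struct {
    char *disk_path;       /* backend file of the disk */
    char *snapshot_name;
} xl_disk_snapshot;

/* Hypothetical libxlu counterpart. */
typedef struct {
    char *disk_path;
    char *snapshot_name;
} libxlu_disk_snapshot;

/* Hypothetical libxlu API that does the actual disk snapshot. */
int xlu_disk_snapshot_create(const libxlu_disk_snapshot *ds);

static int xl_snapshot_one_disk(const xl_disk_snapshot *xd)
{
    libxlu_disk_snapshot ud;

    /* Convert across the xl -> libxlu namespace boundary. */
    ud.disk_path = xd->disk_path;
    ud.snapshot_name = xd->snapshot_name;

    return xlu_disk_snapshot_create(&ud);
}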

Thanks for your concern and suggestions. I appreciate it a lot!
- Chunyan

>  
>   libvirt 
>   libxl libxlu 
>   [other lower-level stuff]
>  
> Wei. 
>  
> > Thanks a lot! 
> > Chunyan 
> >  
> > > The argument is that you're not allowed to run two toolstacks on
> > > the same host (think about xl and xend in previous releases).
> > >
> > > Do these two constraints make your work easier (or harder)?
> > >   
> > > Regarding JSON API, as Ian said, feel free to hook it up to libxlu.  
> > >   
> > > Wei.  
> > >   
> > >   
>  
>  

