
Re: [Xen-devel] [RFC V9 4/4] domain snapshot design: libxl/libxlu

>>> On 1/8/2015 at 08:11 PM, in message <1420719107.19787.53.camel@xxxxxxxxxx>,
>>> Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Mon, 2014-12-22 at 02:36 -0700, Chun Yan Liu wrote: 
> > > > b). For internal snapshots, like qcow2 (lvm too). lvm doesn't support
> > > > snapshot of a snapshot, so it's out of scope. For qcow2, deleting any
> > > > disk snapshot won't affect the others.
> > >   
> > > For either internal or external if you are removing a snapshot from the  
> > > middle of a chain which ends in one or more active disks, then surely  
> > > the disk backends associated with those domains need to get some sort of  
> > > notification, otherwise they would need to be written *very* carefully  
> > > in order to be able to cope with disk metadata changing under their  
> > > feet.  
> > >   
> > > Are you saying that the qemu/qcow implementation has indeed been written  
> > > with this in mind and can cope with arbitrary other processes modifying  
> > > the qcow metadata under their feet?  
> >  
> > Yes. 
> >  
> > I add qemu-devel Kevin and Stefan in this thread in case my understanding 
> > has somewhere wrong. 
> >  
> > Kevin & Stefan, 
> >  
> > About the qcow2 snapshot implementation: in the following snapshot chain case, 
> > if we delete SNAPSHOT A, will it affect domain 1 and domain 2, which use it? 
> >  
> > From my understanding, creating a snapshot increases the refcount of the 
> > original data, and deleting a snapshot only decreases the refcount (the data 
> > won't be deleted until the refcount becomes 0), so I think it won't affect 
> > domain 1 and domain 2. 
> > Is that right? 
> I'm not worried about the data being deleted (I'm sure qcow2 will get 
> that right), but rather about the snapshot chain being collapsed and 
> therefore the metadata (e.g. the tables of which block is in which 
> backing file, and perhaps the location of the data itself) changing 
> while a domain is running, e.g. 
> BASE---SNAPSHOT A---SNAPSHOT B --- domain 1  
>                  `--SNAPSHOT C --- domain 2  
> becoming  
> BASE----------------SNAPSHOT B --- domain 1  
>                  `--SNAPSHOT C --- domain 2  
> (essentially changing B and C's tables to accommodate the lack of A) 
> For an internal snapshot I can see that it would be sensible (and easy) 
> to keep A around as a "ghost", to avoid this case, and the need to 
> perhaps move data around or duplicate it. 

This is what we talked about a lot :-)

If I understand correctly, the qcow2 internal snapshot implementation
works like this:
Taking a snapshot adds a new table which records the data cluster
indexes in that snapshot, and at the same time increases the refcount
of those data clusters.
Deleting a snapshot only decreases the refcounts (the data is not
deleted until a refcount reaches 0) and removes the mapping table.

So I think deleting a snapshot, even one in the middle of a snapshot
chain, won't affect the other snapshots. (If that were not the case, we
would have to move internal data around or duplicate it.)
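The refcount behaviour described above can be sketched with a small model. This is a hypothetical simplification for illustration only, not the real qcow2 code (the class and attribute names are made up; on disk, qcow2 keeps refcounts in refcount blocks and snapshot tables in L1/L2 structures):

```python
# Toy model of qcow2-style cluster refcounting (illustrative only).

class Image:
    def __init__(self):
        self.refcount = {}    # cluster index -> refcount
        self.active = set()   # clusters referenced by the active tables
        self.snapshots = {}   # snapshot name -> frozen set of cluster indexes

    def write_cluster(self, idx):
        # A write allocates a cluster for the active image if needed.
        if idx not in self.active:
            self.active.add(idx)
            self.refcount[idx] = self.refcount.get(idx, 0) + 1

    def create_snapshot(self, name):
        # Snapshot = a frozen copy of the active table, plus a refcount
        # bump on every cluster it references.
        self.snapshots[name] = frozenset(self.active)
        for idx in self.active:
            self.refcount[idx] += 1

    def delete_snapshot(self, name):
        # Deleting only drops refcounts and removes the table; a cluster
        # is freed only when its refcount reaches 0.
        for idx in self.snapshots.pop(name):
            self.refcount[idx] -= 1
            if self.refcount[idx] == 0:
                del self.refcount[idx]

img = Image()
for i in range(3):
    img.write_cluster(i)
img.create_snapshot("A")   # A references clusters 0-2
img.write_cluster(3)
img.create_snapshot("B")   # B references clusters 0-3
img.delete_snapshot("A")   # clusters 0-2 are still held by B and the active image
```

After deleting A, every cluster it referenced is still reachable through B and the active image, which is why the other snapshots are unaffected.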

> If A were external though an admin might think they could delete the 
> file...

As for external snapshots: generally, external disk snapshots are built
on backing files; qemu, vhd-util and lvm all work that way. Like:
    snapshot A -> snapshot B -> Current image
A is B's backing file, and B is the current image's backing file. A
backing file must not be changed; otherwise, if A is changed or deleted,
B is corrupted. So a delete operation is actually a merge process, like
'virsh blockcommit': shorten the disk image chain by live-merging the
current active disk content.

For external disk snapshots there are two operations: revert (which may
need to clone the snapshot out) and delete (in fact a merge, as
mentioned above). The complexity is in the mixed internal/external case
(take an internal snapshot, revert, take an external snapshot, ...).
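The delete-as-merge idea for external snapshots can be sketched the same way. Again, this is a toy model with invented names, not libvirt or qemu code: a read falls through the backing chain, and "deleting" a mid-chain image means committing its clusters into its child before relinking the chain:

```python
# Toy model of a backing-file chain (illustrative only; real external
# snapshots are image files whose unallocated clusters fall through to
# the backing file).

class Layer:
    def __init__(self, name, backing=None):
        self.name = name
        self.backing = backing   # parent image in the chain, or None
        self.clusters = {}       # cluster index -> data written in this layer

    def read(self, idx):
        # A read falls through the chain until some layer has the cluster.
        layer = self
        while layer is not None:
            if idx in layer.clusters:
                return layer.clusters[idx]
            layer = layer.backing
        return None

def commit_into_child(layer, child):
    # "Deleting" a mid-chain snapshot = merging its clusters into its
    # child (clusters already overwritten in the child must win), then
    # pointing the child at the deleted layer's backing file.
    assert child.backing is layer
    for idx, data in layer.clusters.items():
        child.clusters.setdefault(idx, data)
    child.backing = layer.backing

base = Layer("base");       base.clusters = {0: "b0", 1: "b1"}
snap_a = Layer("A", base);  snap_a.clusters = {1: "a1"}
top = Layer("top", snap_a); top.clusters = {2: "t2"}

commit_into_child(snap_a, top)   # remove A from the chain safely
```

After the merge, the top image reads the same data at every cluster as before, which is the invariant a blockcommit-style delete has to preserve.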


> Ian. 

Xen-devel mailing list


