
Re: [Xen-API] [xs-devel] lvmthin



Thanks much, Dave and Pasi, for your feedback. It certainly makes sense to proceed in that order, viz. dealing with thin provisioning of VHD before pursuing a pool-wide lvmthin implementation. Thanks very much for taking this into consideration!
-=Tobias

On 1/12/2015 2:57 AM, Dave Scott wrote:
On 12 Jan 2015, at 09:29, Pasi Kärkkäinen <pasik@xxxxxx> wrote:

On Fri, Jan 09, 2015 at 03:59:26PM -0700, Tobias Kreidl wrote:
   So sorry, I meant of course to write that the utility is lvmthin (not
   thinlvm).

Hello,

I *think* the lvmthin stuff is currently designed for single-host systems,
where the storage is used by a single dedicated host only. That's the usual
model for LVM as well.
That's right.

If multiple hosts access the same LVM volumes one needs to make sure the LVM 
metadata changes are properly synchronized across all hosts,
so all the hosts sharing the same physical LUNs/PVs/VGs/LVs have exactly the 
same idea of the LVM settings, and I believe XAPI does this for normal (thick) 
LVM volumes.
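To make the thick-LVM case concrete, a minimal sketch (in Python; the coordinator
hostname, the SSH transport and the VG name are assumptions, and this is not xapi's
actual code) of funnelling every metadata-changing LVM command through a single
host, so the other hosts never write the shared VG metadata concurrently:

    import subprocess

    COORDINATOR = "pool-master.example.com"   # hypothetical coordinator / SR master

    def lvm_on_coordinator(args):
        """Run a metadata-changing LVM command on the coordinator host only."""
        return subprocess.run(["ssh", COORDINATOR, "lvm"] + args, check=True)

    # Creating a new thick LV: only the coordinator touches the VG metadata;
    # the other hosts just refresh their view of the VG afterwards.
    lvm_on_coordinator(["lvcreate", "-L", "8G", "-n", "VHD-example", "VG_XenStorage"])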

But it gets much more difficult with thin volumes, because the "metadata" can 
change on *every* write IO (when one needs to allocate more blocks from the thin pool?).
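A toy model of why that is (illustration only, nothing like dm-thin's real on-disk
structures): the first write to an unmapped block has to take a block from the
pool's free list and record the new mapping, and with multiple hosts both of those
steps would need to be synchronized:

    class ThinVolume:
        def __init__(self, pool_free_blocks):
            self.free = pool_free_blocks      # free blocks shared by the whole pool
            self.mapping = {}                 # virtual block -> physical block

        def write(self, vblock, data):
            if vblock not in self.mapping:    # first write: allocate, update metadata
                self.mapping[vblock] = self.free.pop()
            return self.mapping[vblock]       # ... then write `data` to that block

    vol = ThinVolume(list(range(1000)))
    vol.write(42, b"hello")   # this write changed the pool metadata
    vol.write(42, b"again")   # already mapped: no metadata change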

Note I haven't looked at the internals of LVM thinprovisioning, but I assume 
it'll be more difficult to get working in shared-LUN/multi-host environments 
compared to normal thick-LVM.

I guess there would have to be some "free-blocks-per-host-for-thin-volumes", so 
each host would allocate new blocks from its dedicated pool, without the risk of 
corrupting the LVM PVs or the volumes... and avoiding the need to sync metadata on
every write IO?
LVM's thin provisioning uses the device mapper "dm-thin" target, which works as you
describe. I think for local, non-shared LVM it looks really good. I've got a very experimental
storage plugin which can use it (ezlvm[1]). Also I think Fedora 21 can boot from thin-provisioned
LVM which would let you do things like snapshot (and revert) the whole dom0 filesystem.
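For reference, the single-host workflow being described looks roughly like this with
the standard LVM2 thin-provisioning commands (a sketch only; the volume group name
vg0 and the sizes are made up):

    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    # Create a 20G thin pool inside an existing volume group vg0.
    run(["lvcreate", "-L", "20G", "--thinpool", "tp0", "vg0"])

    # Create a 100G virtual-size thin volume backed by that pool; blocks are only
    # allocated from tp0 as they are first written.
    run(["lvcreate", "-V", "100G", "--thin", "-n", "thinvol0", "vg0/tp0"])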

I've thought a little about how we could use it in future for shared LVM. Potentially we could make the "thin
pool" work for per-host allocation by a ballooning-like technique. If every host saw the full set of free blocks,
but most of the blocks were masked off by a fake disk (like the balloon driver's balloon) then each host would be
able to allocate locally from the same block address space. This would make it easier to move volumes between hosts. I
think each host would need its own private copy of the LVM metadata, which would function like the journal in this
"thinlvhd" design. When I tried to make this work a while ago I hit a problem when I tried to modify the
"balloon disk": when I reloaded the thin pool metadata and resumed the device mapper device it had clearly
cached some of the data structure in memory because it immediately corrupted itself. This is probably fixable but
requires more experimentation :)
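A rough illustration of the balloon-style allocation scheme (names and numbers are
made up, and this only models the block accounting, not device mapper itself): every
host sees the same flat block address space but only allocates from its own unmasked
window, so block numbers stay globally unique and volumes can move between hosts
without renumbering:

    POOL_BLOCKS = 1_000_000               # total blocks in the shared thin pool

    class HostAllocator:
        def __init__(self, host_id, hosts):
            chunk = POOL_BLOCKS // hosts  # blocks outside this window are "ballooned"
            self.window = iter(range(host_id * chunk, (host_id + 1) * chunk))

        def allocate(self):
            return next(self.window)      # no cross-host locking needed per write

    host0, host1 = HostAllocator(0, 2), HostAllocator(1, 2)
    assert host0.allocate() != host1.allocate()   # windows never overlap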

So for the shared case I think we should work on "thinlvhd" (i.e. thin
provisioning via tapdisk) first but play with dm-thin in the background :)

Cheers,
Dave

[1] https://github.com/xapi-project/ezlvm



Just random thoughts :)

-- Pasi

   On 1/9/2015 3:54 PM, Tobias Kreidl wrote:

      As of around CentOS 6.5 (and for sure in RHEL 6.4), there is a
      thinlvm utility that seems to take care of supporting
      thinly-provisioned LVM volumes. There is an associated snapshot design
      based on LVM thin provisioning that even supports the ability to do
      snapshots of snapshots, etc. down the chain.
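For example, the snapshot-of-snapshot chaining works roughly like this with the
standard LVM2 thin snapshot commands (a sketch; the vg0/thinvol0 names are
assumptions, and thin snapshots are created with the activation-skip flag set,
hence the -K when activating):

    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    run(["lvcreate", "-s", "-n", "snap1", "vg0/thinvol0"])  # snapshot of the origin
    run(["lvcreate", "-s", "-n", "snap2", "vg0/snap1"])     # snapshot of a snapshot
    run(["lvchange", "-ay", "-K", "vg0/snap2"])             # activate despite skip flag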

      Unfortunately, from a cursory look, it doesn't look like a backport to
      CentOS 5 would be that easy or even possible, but the fact that it is
      integrated into both CentOS 6 and CentOS 7 gives some hope for
      standardized support of it in a future XenServer release, doesn't it?
      The other big question would be how readily something like this could be
      integrated into XenServer.

     -=Tobias




_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api

 

