
Re: [Xen-API] vhdx support ?



I also googled around and found out that something is cooking in QEMU to support VHDX files:

http://wiki.qemu.org/ChangeLog/1.5#Block_devices

  • VHDX (MS Hyper-V) image format has initial read-only support. Dynamic and fixed sized disks are supported, but not differencing images (e.g. VHDX images with a backing file). Read-only is strictly enforced, and the 'readonly=on' option must be used for any VHDX images.

https://bugzilla.redhat.com/show_bug.cgi?id=879234
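
For anyone who wants to play with that initial read-only support, here
is a minimal sketch (Python; it assumes a qemu-img binary from QEMU
1.5+ built with the VHDX driver is on the PATH, and disk.vhdx /
disk.raw are just placeholder names) that inspects a VHDX image and
copies its contents out to a raw image:

    import subprocess

    SRC = "disk.vhdx"   # placeholder: any dynamic or fixed VHDX image
    DST = "disk.raw"    # raw copy that a Xen/XCP SR could use directly

    # Report the format and virtual size as qemu-img sees them.
    subprocess.check_call(["qemu-img", "info", "-f", "vhdx", SRC])

    # The read-only driver is still enough to extract the data: convert
    # the VHDX container into a plain raw image.
    subprocess.check_call(["qemu-img", "convert", "-f", "vhdx",
                           "-O", "raw", SRC, DST])

Attaching a VHDX directly to a guest needs the readonly=on flag
mentioned in the changelog, since writes are strictly refused.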

Maybe I'm wrong, but it seems that upstream QEMU will be used in the not-so-distant future to support the Ceph filesystem as well:

http://www.xenserver.org/component/easyblog/entry/tech-preview-of-xenserver-libvirt-ceph.html?Itemid=179

Maybe a dream come true? :)

Cheers,
Sébastien


On 11.07.2013 05:57, srinivas jonn wrote:
Mike,

this is an existing open-source VHDX implementation for the XenServer storage team to consider:

http://discutils.codeplex.com/SourceControl/latest#src/Vhdx/DiskImageFile.cs


"DiscUtils is a .NET library to read and write ISO files and Virtual Machine disk files (VHD, VDI, XVA, VMDK, etc). DiscUtils is developed in C# with no native code (or P/Invoke)"


From: Sébastien Riccio <sr@xxxxxxxxxxxxxxx>
To: Mike McClurg <mike.mcclurg@xxxxxxxxxx>
Cc: "xen-api@xxxxxxxxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxxxxxxxx>
Sent: Monday, 11 June 2012 3:10 PM
Subject: Re: [Xen-API] vhdx support ?

Hi Mike,

Thanks for your reply. Well, yes, VHDX is very new; it is not yet
released, as it's part of the Windows 8 Server Hyper-V layer, which is
currently in beta as far as I know. But this is still very interesting,
and I am a bit worried that Windows 8's Hyper-V is going to take a big
step ahead of other virtualisation solutions.
I love Xen and XCP, but I must admit that they've implemented some
really nice features...

I don't think there is any open-source VHDX implementation yet. I
thought there was a partnership between Citrix and Microsoft, but maybe
I'm wrong.

Still, the technical specification document is available on Microsoft's site:

http://www.microsoft.com/en-us/download/details.aspx?id=29681

Your storage team might want to take a look at it.
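
For a quick taste of how approachable the format is, here is a minimal
sketch (Python) of reading the file identifier the spec describes: an
8-byte ASCII signature "vhdxfile" at offset 0, followed by a 512-byte
UTF-16 creator field naming the tool that wrote the image. The
disk.vhdx name is just a placeholder:

    def vhdx_creator(path):
        """Return the creator string if 'path' starts with the VHDX
        file identifier, or None if the 'vhdxfile' signature is
        missing."""
        with open(path, "rb") as f:
            ident = f.read(8 + 512)
        if ident[:8] != b"vhdxfile":
            return None
        # The creator field is 512 bytes of NUL-padded UTF-16 text.
        return ident[8:].decode("utf-16-le",
                                errors="replace").rstrip("\x00")

    print(vhdx_creator("disk.vhdx") or "not a VHDX image")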

Cheers,
Sébastien


On 07.06.2012 10:35, Mike McClurg wrote:
> On 01/06/12 23:29, Sébastien Riccio wrote:
>> Hi,
>>
>> I don't know where this question should be posted, but I'll try here.
>>
>> Is there any plan for XenServer/XCP/Kronos to support the VHDX format
>> that should get rid of the 2TB limit for a single volume?
>>
>> As seen somewhere on the interweb:
>>
>> Now with VHDX, Microsoft kills this limitation and brings some other
>> improvements:
>>
>>  * Supports disks up to 16TB in size
>>  * Supports larger block sizes
>>  * Improved performance
>>  * Improved corruption resistance
>
> I just spoke to our storage team dev lead about this. The short answer
> is that we want to support it, but we don't have any plans for it in
> the short term.
>
> The real benefits we would get out of VHDX would be breaking the 2TB
> limit, and potential performance improvements. Modifying our current
> VHD implementation might let us do that, without actually implementing
> VHDX. Perhaps QCOW images might allow disks bigger than 2TB, but I
> don't really know.
>
> The biggest issue with implementing VHDX is that we don't know of any
> existing, open-source implementation of it, which means that we would
> have to invest a lot of time to write our own from scratch. If anyone
> knows of any existing VHDX implementations that we can use, I'm sure
> the storage team would like to hear about it!
>
> Mike
>


_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api

