
Re: [Xen-devel] [PATCH] block: vpc support for ~2 TB disks



We don't use qemu's VHD driver in XenServer. Instead, we use blktap2 to create 
a block device in dom0 serving the VHD file in question, and have qemu open 
that block device instead of the VHD file itself.
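For context, the flow described above looks roughly like the sketch below. The VHD path and the tapdev minor number are illustrative only (they are not taken from this thread); blktap2's tap-ctl attaches the image in dom0 and qemu is then pointed at the resulting raw block device rather than the VHD file.

    # blktap2 exposes the VHD as a block device in dom0 (illustrative path)
    tap-ctl create -a vhd:/path/to/disk.vhd
    # qemu then opens the tapdev as a raw device instead of the VHD file
    qemu ... -drive file=/dev/xen/blktap-2/tapdev0,format=raw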

> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@xxxxxxxxxxxxx]
> Sent: 13 November 2012 10:56
> To: Paolo Bonzini
> Cc: Charles Arnold; qemu-devel@xxxxxxxxxx; Kevin Wolf;
> stefanha@xxxxxxxxxx; Stefano Stabellini; xen-devel@xxxxxxxxxxxxxxxxxxx;
> Thanos Makatos
> Subject: Re: [PATCH] block: vpc support for ~2 TB disks
> 
> On Tue, 13 Nov 2012, Paolo Bonzini wrote:
> > On 12/11/2012 20:12, Charles Arnold wrote:
> > > Ping?
> > >
> > > Any thoughts on whether this is acceptable?
> >
> > I would like to know what is done by other platforms.  Stefano, any
> > idea about XenServer?
> >
> 
> I am not sure, but maybe Thanos, who is working on blktap3, knows, or
> at least knows who to CC.
> 
> 
> 
> > > - Charles
> > >
> > >>>> On 10/30/2012 at 08:59 PM, in message
> > >>>> <50A0E561.5B74.0091.0@xxxxxxxx>, Charles Arnold wrote:
> > >> The VHD specification allows for up to a 2 TB disk size. The
> > >> current implementation in qemu emulates EIDE and ATA-2 hardware
> > >> which only allows for up to 127 GB.  This disk size limitation can
> > >> be overridden by allowing up to 255 heads instead of the normal 4
> > >> bit limitation of 16.  Doing so allows disk images to be created of
> > >> up to nearly 2 TB.  This change does not violate the VHD format
> > >> specification nor does it change how smaller disks (ie, <=127GB)
> > >> are defined.
> > >>
> > >> Signed-off-by: Charles Arnold <carnold@xxxxxxxx>
> > >>
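As a quick sanity check on the 127 GB and ~2 TB figures quoted in the description, the two CHS products work out as follows (a standalone sketch, assuming the usual 512-byte sector size implied by the VHD footer geometry):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Old limit: 65535 cylinders * 16 heads * 255 sectors per track */
        int64_t old_sectors = 65535LL * 16 * 255;   /* 267,382,800 sectors   */
        /* New limit: 65535 cylinders * 255 heads * 255 sectors per track */
        int64_t new_sectors = 65535LL * 255 * 255;  /* 4,261,413,375 sectors */

        /* ~127 GiB: the old EIDE/ATA-2 cap */
        printf("old: %" PRId64 " bytes (~127 GiB)\n", old_sectors * 512);
        /* ~1.98 TiB: the "nearly 2 TB" mentioned in the description */
        printf("new: %" PRId64 " bytes (~1.98 TiB)\n", new_sectors * 512);
        return 0;
    }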
> > >> diff --git a/block/vpc.c b/block/vpc.c
> > >> index b6bf52f..0c2eaf8 100644
> > >> --- a/block/vpc.c
> > >> +++ b/block/vpc.c
> > >> @@ -198,7 +198,8 @@ static int vpc_open(BlockDriverState *bs, int flags)
> > >>      bs->total_sectors = (int64_t)
> > >>          be16_to_cpu(footer->cyls) * footer->heads * footer->secs_per_cyl;
> > >>
> > >> -    if (bs->total_sectors >= 65535 * 16 * 255) {
> > >> +    /* Allow a maximum disk size of approximately 2 TB */
> > >> +    if (bs->total_sectors >= 65535LL * 255 * 255) {
> > >>          err = -EFBIG;
> > >>          goto fail;
> > >>      }
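One detail worth noting in the hunk above: the LL suffix is not cosmetic. The old bound 65535 * 16 * 255 happens to fit in a signed 32-bit int, but 65535 * 255 * 255 does not, so the constant must be widened before the multiplication is performed. A minimal illustration:

    #include <stdio.h>

    int main(void)
    {
        /* Fits in a signed 32-bit int: 267,382,800 */
        printf("%d\n", 65535 * 16 * 255);
        /* 4,261,413,375 exceeds INT_MAX, so the first factor is widened
         * to long long (65535LL) before multiplying, as in the patch.   */
        printf("%lld\n", 65535LL * 255 * 255);
        return 0;
    }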
> > >> @@ -524,19 +525,27 @@ static coroutine_fn int vpc_co_write(BlockDriverState *bs, int64_t sector_num,
> > >>   * Note that the geometry doesn't always exactly match total_sectors but
> > >>   * may round it down.
> > >>   *
> > >> - * Returns 0 on success, -EFBIG if the size is larger than 127 GB
> > >> + * Returns 0 on success, -EFBIG if the size is larger than ~2 TB. Override
> > >> + * the hardware EIDE and ATA-2 limit of 16 heads (max disk size of 127 GB)
> > >> + * and instead allow up to 255 heads.
> > >>   */
> > >>  static int calculate_geometry(int64_t total_sectors, uint16_t* cyls,
> > >>      uint8_t* heads, uint8_t* secs_per_cyl)
> > >>  {
> > >>      uint32_t cyls_times_heads;
> > >>
> > >> -    if (total_sectors > 65535 * 16 * 255)
> > >> +    /* Allow a maximum disk size of approximately 2 TB */
> > >> +    if (total_sectors > 65535LL * 255 * 255) {
> > >>          return -EFBIG;
> > >> +    }
> > >>
> > >>      if (total_sectors > 65535 * 16 * 63) {
> > >>          *secs_per_cyl = 255;
> > >> -        *heads = 16;
> > >> +        if (total_sectors > 65535 * 16 * 255) {
> > >> +            *heads = 255;
> > >> +        } else {
> > >> +            *heads = 16;
> > >> +        }
> > >>          cyls_times_heads = total_sectors / *secs_per_cyl;
> > >>      } else {
> > >>          *secs_per_cyl = 17;
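To see what the patched branch above actually produces, here is a standalone sketch of just the large-disk path (the small-disk branch and the additional rounding that qemu's real calculate_geometry() performs afterwards are omitted), together with two example sizes:

    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the >65535*16*63-sector branch shown above; not the full
     * qemu function, just the head selection and a naive cylinder count. */
    static void pick_large_chs(int64_t total_sectors, uint16_t *cyls,
                               uint8_t *heads, uint8_t *secs_per_cyl)
    {
        uint32_t cyls_times_heads;

        *secs_per_cyl = 255;
        /* New in the patch: switch to 255 heads once 16 heads can no
         * longer address the image (above the ~127 GB boundary).      */
        *heads = (total_sectors > 65535 * 16 * 255) ? 255 : 16;
        cyls_times_heads = total_sectors / *secs_per_cyl;
        *cyls = cyls_times_heads / *heads;
    }

    int main(void)
    {
        uint16_t cyls;
        uint8_t heads, secs;

        /* 1 TiB image: 2^31 sectors of 512 bytes -> 255 heads. */
        pick_large_chs(2147483648LL, &cyls, &heads, &secs);
        printf("1 TiB   -> C/H/S = %d/%d/%d\n", cyls, heads, secs);

        /* 100 GiB image: stays on the old 16-head geometry. */
        pick_large_chs(209715200LL, &cyls, &heads, &secs);
        printf("100 GiB -> C/H/S = %d/%d/%d\n", cyls, heads, secs);
        return 0;
    }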
> > >
> > >
> > >
> > >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel