
Re: [PATCH 0/6] Align blkif protocol values to 512B sectors


  • To: Tu Dinh <ngoc-tu.dinh@xxxxxxxxxx>, "win-pv-devel@xxxxxxxxxxxxxxxxxxxx" <win-pv-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Owen Smith <owen.smith@xxxxxxxxxx>
  • Date: Wed, 29 Apr 2026 11:15:42 +0000
  • Delivery-date: Wed, 29 Apr 2026 11:15:50 +0000
  • List-id: Developer list for the Windows PV Drivers subproject <win-pv-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 0/6] Align blkif protocol values to 512B sectors

This series looks like a good update and passes basic testing.

Entire series
Reviewed-by: Owen Smith <owen.smith@xxxxxxxxxx>

________________________________________
From: Tu Dinh <ngoc-tu.dinh@xxxxxxxxxx>
Sent: 28 April 2026 10:02 AM
To: win-pv-devel@xxxxxxxxxxxxxxxxxxxx
Cc: Tu Dinh; Owen Smith
Subject: [PATCH 0/6] Align blkif protocol values to 512B sectors

Xen revision 221f2748e8da deprecated feature-large-sector-size and
clarified that all protocol-level sector sizes are expressed in
512-byte units. This matches the behaviour observed with Linux
blkback, and deviating from it can corrupt the virtual disk.

Tested on Linux blkback with a loop device using a 4K block size. The
guest used a PV-only disk "xvdz", as QEMU doesn't support logical
sector sizes larger than 512B for emulated devices. Note that Qdisk is
still broken with respect to 221f2748e8da.

Tu Dinh (6):
  Update to latest blkif.h
  Stop reporting feature-large-sector-size
  Align blkif protocol values to 512B sectors
  Centralize VBD extent checking
  xencrsh: Stop reporting feature-large-sector-size
  xencrsh: Align blkif protocol values to 512B sectors

 include/xen/io/blkif.h | 103 ++++++++++++++++++++++++++++-------------
 src/xencrsh/frontend.c |  30 ++++++------
 src/xencrsh/frontend.h |  10 +++-
 src/xencrsh/pdo.c      | 103 +++++++++++++++++++++++++++--------------
 src/xenvbd/frontend.c  |  90 +++++++++++++++++++++--------------
 src/xenvbd/frontend.h  |  36 +++++++++++++-
 src/xenvbd/ring.c      |  60 ++++++++++++------------
 src/xenvbd/target.c    |  37 ++++++++-------
 8 files changed, 302 insertions(+), 167 deletions(-)

--
2.54.0.windows.1


--
Ngoc Tu Dinh | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



 

