Re: [Xen-devel] [PATCH v2] xen: fix qdisk BLKIF_OP_DISCARD for 32/64 word size mix
On 17/06/16 11:37, Paul Durrant wrote:
>> -----Original Message-----
>> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of
>> Jan Beulich
>> Sent: 17 June 2016 10:26
>> To: Juergen Gross
>> Cc: Anthony Perard; xen-devel; sstabellini@xxxxxxxxxx;
>> qemu-devel@xxxxxxxxxx; kraxel@xxxxxxxxxx
>> Subject: Re: [Xen-devel] [PATCH v2] xen: fix qdisk BLKIF_OP_DISCARD
>> for 32/64 word size mix
>>
>>>>> On 17.06.16 at 11:14, <JGross@xxxxxxxx> wrote:
>>> In case the word sizes of the domU and of the qemu running the qdisk
>>> backend differ, BLKIF_OP_DISCARD will not work reliably, as the
>>> request structure in the ring has different layouts for different
>>> word sizes.
>>>
>>> Correct this by copying the request structure element by element in
>>> the BLKIF_OP_DISCARD case, too, when the word sizes differ.
>>>
>>> The easiest way to achieve this is to resync hw/block/xen_blkif.h
>>> with its original source from the Linux kernel.
>>>
>>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>>> ---
>>> V2: resync with Linux kernel version of hw/block/xen_blkif.h as
>>> suggested by Paul Durrant
>>
>> Oh, I didn't realize he suggested syncing with the Linux variant.
>> Why not with the canonical one? I have to admit that I particularly
>> dislike Linux's strange union-izing, mainly because it requires
>> this myriad of __attribute__((__packed__)).
>>
>
> Yes, it's truly grotesque and such things should be blown away with
> extreme prejudice.

Sorry, I'm confused now. Do you still mandate the resync or not?

Resyncing while eliminating all the __packed__ stuff doesn't seem to be
a viable alternative, as that would require a major rework. So either I
do a resync and keep this stuff, or I don't resync at all.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
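The layout problem under discussion comes down to uint64_t alignment: on
x86-32 a uint64_t is 4-byte aligned inside a struct, while on x86-64 it
is 8-byte aligned, so the discard request's fields sit at different
offsets depending on which side built the structure. The sketch below is
illustrative only, not the actual QEMU patch; the struct and function
names are modeled on the blkif headers but simplified, and the copy
helper shows the element-by-element approach the patch applies to
BLKIF_OP_DISCARD.

/*
 * Minimal sketch (not the actual QEMU/Linux code) of why
 * BLKIF_OP_DISCARD breaks when a 32-bit guest talks to a 64-bit
 * backend, and how an element-by-element copy fixes it.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef uint16_t blkif_vdev_t;
typedef uint64_t blkif_sector_t;

/* Layout as a 32-bit guest puts it on the shared ring: the i386 ABI
 * 4-byte-aligns uint64_t, so there is no padding before 'id'; the
 * packed attribute reproduces that layout on a 64-bit build (this is
 * what the Linux header's __attribute__((__packed__)) is for). */
struct blkif_x86_32_request_discard {
    uint8_t        operation;     /* BLKIF_OP_DISCARD */
    uint8_t        flag;          /* BLKIF_DISCARD_SECURE or zero */
    blkif_vdev_t   handle;
    uint64_t       id;            /* offset 4 in this layout */
    blkif_sector_t sector_number; /* first sector to discard */
    uint64_t       nr_sectors;    /* number of sectors to discard */
} __attribute__((__packed__));

/* Native layout of a 64-bit backend: 'id' is 8-byte aligned, so four
 * bytes of padding precede it. */
struct blkif_request_discard {
    uint8_t        operation;
    uint8_t        flag;
    blkif_vdev_t   handle;
    uint64_t       id;            /* offset 8 in this layout */
    blkif_sector_t sector_number;
    uint64_t       nr_sectors;
};

/* The fix in a nutshell: never copy the ring slot wholesale when the
 * word sizes differ; copy each element so the compiler applies the
 * correct offsets on both sides. */
static void get_x86_32_discard(struct blkif_request_discard *dst,
                               const struct blkif_x86_32_request_discard *src)
{
    dst->operation     = src->operation;
    dst->flag          = src->flag;
    dst->handle        = src->handle;
    dst->id            = src->id;
    dst->sector_number = src->sector_number;
    dst->nr_sectors    = src->nr_sectors;
}

int main(void)
{
    printf("32-bit ring layout: %zu bytes, 64-bit native: %zu bytes\n",
           sizeof(struct blkif_x86_32_request_discard),
           sizeof(struct blkif_request_discard));
    printf("'id' offset: 32-bit ring %zu, 64-bit native %zu\n",
           offsetof(struct blkif_x86_32_request_discard, id),
           offsetof(struct blkif_request_discard, id));
    return 0;
}

Compiled for x86-64 this prints 28 vs 32 bytes, with 'id' at offset 4 in
the 32-bit ring layout and offset 8 natively: a wholesale memcpy() of
the ring slot would read every 64-bit field from the wrong place, which
is exactly why the per-element copy in the BLKIF_OP_DISCARD path is
needed.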