
Re: [PATCH-for-9.0 04/10] hw/xen: Factor xen_arch_align_ioreq_data() out of handle_ioreq()



On Tue, 2023-11-14 at 08:58 +0100, Philippe Mathieu-Daudé wrote:
> > > Reviewing quickly hw/block/dataplane/xen-block.c, this code doesn't
> > > seem target specific at all IMHO. Otherwise I'd really expect it to
> > > fail compiling. But I don't know much about Xen, so I'll let block &
> > > xen experts to have a look.
> > 
> > Where it checks dataplane->protocol and does different things for
> > BLKIF_PROTOCOL_NATIVE/BLKIF_PROTOCOL_X86_32/BLKIF_PROTOCOL_X86_64, the
> > *structures* it uses are intended to be using the correct ABI. I think
> > the structs for BLKIF_PROTOCOL_NATIVE may actually be *different*
> > according to the target, in theory?
> 
> OK I see what you mean, blkif_back_rings_t union in hw/block/xen_blkif.h
> 
> These structures shouldn't differ between targets, this is the point of
> an ABI :) 

Structures like that *shouldn't* differ between targets, but the Xen
struct blkif_request does:

typedef uint16_t blkif_vdev_t;
struct blkif_request {
    uint8_t        operation;    /* BLKIF_OP_???                         */
    uint8_t        nr_segments;  /* number of segments                   */
    blkif_vdev_t   handle;       /* only for read/write requests         */
    uint64_t       id;           /* private guest value, echoed in resp  */
    blkif_sector_t sector_number;/* start sector idx on disk (r/w only)  */
    struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
};

This is why we end up with explicit versions for x86-32 and x86-64,
with the 'id' field aligned explicitly to 4 or 8 bytes. The 'native'
version when we build it in qemu will just use the *host* ABI to decide
how to align it.

> And if they were, they wouldn't compile as target agnostic.

The words "wouldn't compile" give me fantasies of a compiler that
literally errors out, saying "you can't use that struct in arch-
agnostic code because the padding is different on different
architectures".

It isn't so. You're just a tease.

With the exception of the x86-{32,64} special case, the
BLKIF_PROTOCOL_NATIVE support ends up using the *host* ABI, regardless
of which target it was aimed at.

Which might even be OK; are there any other targets which align
uint64_t to 4 bytes, like i386 does? And it certainly isn't *your*
problem anyway.

What I was thinking was that if we *do* need to do something to make
BLKIF_PROTOCOL_NATIVE actually differ between targets, maybe that would
mean we really *do* want to build this code separately for each target
rather than just once? 

I suppose *if* we fix it, we can fix it in a way that doesn't require
target-specific compilation. Like we already did for x86. I literally
use object_dynamic_cast(qdev_get_machine(), "x86-machine") there.

> 
> > I think this series makes it look like target-agnostic support *should*
> > work... but it doesn't really?
> 
> For testing we have:
> 
> aarch64: tests/avocado/boot_xen.py
> x86_64: tests/avocado/kvm_xen_guest.py
> 
> No combination with i386 is tested,
> Xen within aarch64 KVM is not tested (not sure it works).

No, there is support in the *kernel* for Xen guests, but it hasn't
been added to AArch64 KVM.
