Re: [PATCH for-4.14] mm: fix public declaration of struct xen_mem_acquire_resource
On 23.06.2020 19:26, Roger Pau Monné wrote:
> On Tue, Jun 23, 2020 at 06:18:52PM +0200, Jan Beulich wrote:
>> On 23.06.2020 17:56, Roger Pau Monné wrote:
>>> On Tue, Jun 23, 2020 at 05:02:04PM +0200, Jan Beulich wrote:
>>>> On 23.06.2020 15:52, Roger Pau Monne wrote:
>>>>> XENMEM_acquire_resource and its related structure are currently inside
>>>>> a __XEN__ or __XEN_TOOLS__ guarded section, limiting their scope to the
>>>>> hypervisor and the toolstack only. This is wrong, as the hypercall is
>>>>> already being used by at least the Linux kernel, and as such needs to
>>>>> be public.
>>>>>
>>>>> Also switch the usage of uint64_aligned_t to plain uint64_t, as
>>>>> uint64_aligned_t is only to be used by the toolstack. Note that the
>>>>> layout of the structure will be the same, as the field is already
>>>>> naturally aligned to an 8-byte boundary.
>>>>
>>>> I'm afraid it's more complicated, and hence ...
>>>>
>>>>> No functional change expected.
>>>>
>>>> ... this doesn't hold: As I notice only now, the struct also wrongly
>>>> (if it was meant to be a tools-only interface) failed to use
>>>> XEN_GUEST_HANDLE_64(), which would in principle have been a problem
>>>> for 32-bit callers (passing garbage in the upper half of a handle).
>>>> It's not an actual problem because, unlike would typically be the
>>>> case for tools-only interfaces, there is compat translation for this
>>>> struct.
>>>
>>> Yes, there's already compat translation for the frame_list array.
>>>
>>>> With this, however, the problem of your change becomes noticeable:
>>>> The structure's alignment for 32-bit x86, and with it the structure's
>>>> sizeof(), both change. You should be able to see this by comparing
>>>> xen/common/compat/memory.c's disassembly before and after your
>>>> change - the number of bytes to copy by the respective
>>>> copy_from_guest() ought to have changed.
>>>
>>> Right, this is the layout with frame aligned to 8 bytes:
>>>
>>> struct xen_mem_acquire_resource {
>>>         uint16_t               domid;          /*     0     2 */
>>>         uint16_t               type;           /*     2     2 */
>>>         uint32_t               id;             /*     4     4 */
>>>         uint32_t               nr_frames;      /*     8     4 */
>>>         uint32_t               pad;            /*    12     4 */
>>>         uint64_t               frame;          /*    16     8 */
>>>         long unsigned int *    frame_list;     /*    24     4 */
>>>
>>>         /* size: 32, cachelines: 1, members: 7 */
>>>         /* padding: 4 */
>>>         /* last cacheline: 32 bytes */
>>> };
>>>
>>> And this is without:
>>>
>>> struct xen_mem_acquire_resource {
>>>         uint16_t               domid;          /*     0     2 */
>>>         uint16_t               type;           /*     2     2 */
>>>         uint32_t               id;             /*     4     4 */
>>>         uint32_t               nr_frames;      /*     8     4 */
>>>         uint32_t               pad;            /*    12     4 */
>>>         uint64_t               frame;          /*    16     8 */
>>>         long unsigned int *    frame_list;     /*    24     4 */
>>>
>>>         /* size: 28, cachelines: 1, members: 7 */
>>>         /* last cacheline: 28 bytes */
>>> };
>>>
>>> Note Xen has already been copying those extra 4 padding bytes from
>>> 32-bit Linux kernels AFAICT, as the struct declaration in Linux has
>>> dropped the aligned attribute.
>>>
>>>> The question now is whether we're willing to accept such a (marginal)
>>>> incompatibility. The chance of a 32-bit guest actually running into
>>>> it shouldn't be very high, but with the right placement of an
>>>> instance of the struct at the end of a page it would be possible to
>>>> observe afaict.
>>>>
>>>> Or whether we go the alternative route and pad the struct suitably
>>>> for 32-bit.
>>>
>>> I guess we are trapped with what Linux has in its headers now, so we
>>> should handle the struct size difference in Xen?
>>
>> How do you mean to notice the difference in Xen? You don't know
>> what the caller's environment's sizeof() would yield.
>
> I'm confused. Couldn't we switch from uint64_aligned_t to plain
> uint64_t (like it's currently in the Linux headers), and then use the
> compat layer in Xen to handle the size difference when called from
> 32-bit environments?

And which size would we use there?
The old or the new one (breaking future or existing callers
respectively)?

Meanwhile I think that if this indeed needs to not be tools-only
(which I still question), then our only possible route is to add
explicit padding for the 32-bit case alongside the change you're
already making.

> This would of course assume that no toolstack has implemented direct
> calls using this interface, which seems likely because it either
> returns mfns to be mapped in the PV case or requires gfns to be
> provided for HVM.

What are "direct calls" in the case here? We aren't talking about a
dmop interface, but a memop one, are we? It's merely an interface that
an ioreq server implementation ought to use to get hold of the ring
pages (i.e. there's only a weak connection to dmop).

Jan