Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter

On 07.07.2020 10:44, Julien Grall wrote:
> Hi,
>
> On 06/07/2020 09:46, Jan Beulich wrote:
>> On 04.07.2020 19:23, Julien Grall wrote:
>>> Hi,
>>>
>>> On 03/07/2020 11:11, Roger Pau Monné wrote:
>>>> On Fri, Jul 03, 2020 at 11:56:38AM +0200, Jan Beulich wrote:
>>>>> On 03.07.2020 11:44, Roger Pau Monné wrote:
>>>>>> On Thu, Jul 02, 2020 at 06:23:28PM +0200, Michał Leszczyński wrote:
>>>>>>> ----- On 2 Jul 2020, at 11:00, Roger Pau Monné roger.pau@xxxxxxxxxx wrote:
>>>>>>>
>>>>>>>> On Tue, Jun 30, 2020 at 02:33:46PM +0200, Michał Leszczyński wrote:
>>>>>>>>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>>>>>>>>> index 59bdc28c89..7b8289d436 100644
>>>>>>>>> --- a/xen/include/public/domctl.h
>>>>>>>>> +++ b/xen/include/public/domctl.h
>>>>>>>>> @@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
>>>>>>>>>      uint32_t max_evtchn_port;
>>>>>>>>>      int32_t max_grant_frames;
>>>>>>>>>      int32_t max_maptrack_frames;
>>>>>>>>> +    uint8_t vmtrace_pt_order;
>>>>>>>>
>>>>>>>> I've been thinking about this, and even though this is a domctl (so
>>>>>>>> not a stable interface) we might want to consider using a size (or a
>>>>>>>> number of pages) here rather than an order. IPT also supports
>>>>>>>> TOPA mode (kind of a linked list of buffers) that would allow
>>>>>>>> sizes not rounded to order boundaries to be used, since then only each
>>>>>>>> item in the linked list needs to be rounded to an order boundary, so
>>>>>>>> you could for example use three 4K pages in TOPA mode AFAICT.
>>>>>>>>
>>>>>>>> Roger.
>>>>>>>
>>>>>>> In previous versions it was "size", but it was requested to change it
>>>>>>> to "order" in order to shrink the variable from uint64_t to
>>>>>>> uint8_t, because there is limited space in the xen_domctl_createdomain
>>>>>>> structure.
>>>>>>
>>>>>> It's likely I'm missing something here, but I wasn't aware
>>>>>> xen_domctl_createdomain had any constraints regarding its size. It's
>>>>>> currently 48 bytes, which seems fairly small.
>>>>>
>>>>> Additionally I would guess a uint32_t could do here, if the value
>>>>> passed was "number of pages" rather than "number of bytes"?
>>>
>>> Looking at the rest of the code, the toolstack accepts a 64-bit value,
>>> so this would lead to truncation of the buffer if it is bigger than 2^44
>>> bytes.
>>>
>>> I agree such a buffer is unlikely, yet I still think we want to harden the
>>> code whenever we can. So the solution is to either check for
>>> truncation in libxl or directly use 64-bit in the domctl.
>>>
>>> My preference is the latter.
>>>
>>>> That could work; I'm not sure whether it needs to state, however, that those
>>>> will be 4K pages, since Arm can have a different minimum page size IIRC?
>>>> (Or is that already the assumption for all number-of-frames fields?)
>>>> vmtrace_nr_frames seems fine to me.
>>>
>>> The hypercall interface uses the same page granularity as the
>>> hypervisor (i.e. 4KB).
>>>
>>> While we already support guests using 64KB page granularity, it is
>>> impossible to have a 64KB Arm hypervisor in the current state. You are
>>> going to either break existing guests (if you switch to 64KB page
>>> granularity for the hypercall ABI) or render them insecure (the minimum
>>> mapping in the P2M would be 64KB).
>>>
>>> DOMCTLs are not stable yet, so using a number of pages is OK. However, I
>>> would strongly suggest using a number of bytes for any xl/libxl/stable
>>> library interfaces, as this avoids confusion and is also more
>>> future-proof.
>>
>> If we can't settle on what "page size" means in the public interface
>> (which imo is embarrassing), then how about going with a number of kb,
>> like other libxl memory controls do? (I guess using Mb, in line with
>> other config file controls, may end up being too coarse here.) This
>> would likely still allow for a 32-bit field to be wide enough.
>
> A 32-bit field would definitely not be able to cover a full address
> space. So would you mind explaining what upper bound you expect here?

Do you foresee a need for buffer sizes of 4Tb and up?

Jan
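
For illustration, here is a minimal sketch of the kind of toolstack-side
check discussed above: accepting a 64-bit byte count at the libxl level,
converting it to a count of 4KiB frames for the domctl, and failing rather
than silently truncating. This is not code from the patch series; the
helper name, constant names, error codes, and the assumed 32-bit domctl
field are all placeholders for the example.

/*
 * Illustrative sketch only -- not libxl code.  Convert a user-supplied
 * byte count into a 32-bit count of 4KiB frames, refusing values that
 * would otherwise be rounded or truncated silently.
 */
#include <errno.h>
#include <stdint.h>

#define VMTRACE_PAGE_SHIFT 12   /* hypercall ABI granularity: 4KiB */
#define VMTRACE_PAGE_SIZE  (1UL << VMTRACE_PAGE_SHIFT)

static int vmtrace_bytes_to_frames(uint64_t size_bytes, uint32_t *nr_frames)
{
    uint64_t frames;

    /* Require a whole number of 4KiB pages rather than rounding. */
    if ( size_bytes & (VMTRACE_PAGE_SIZE - 1) )
        return -EINVAL;

    frames = size_bytes >> VMTRACE_PAGE_SHIFT;

    /* Reject values a 32-bit domctl field could not represent. */
    if ( frames > UINT32_MAX )
        return -EOVERFLOW;

    *nr_frames = (uint32_t)frames;
    return 0;
}

A 64-bit field in the domctl, as Julien prefers, would make the second
check unnecessary; a KiB-based 32-bit field, as Jan suggests, would cap
the buffer at 4TiB.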