Re: [PATCH] x86/hvm: Clean up track_dirty_vram() calltree
On 23.07.2020 11:40, Andrew Cooper wrote:
> On 22/07/2020 17:13, Jan Beulich wrote:
>> On 22.07.2020 17:15, Andrew Cooper wrote:
>>>  * Rename nr to nr_frames.  A plain 'nr' is confusing to follow in the
>>>    lower levels.
>>>  * Use DIV_ROUND_UP() rather than opencoding it in several different ways.
>>>  * The hypercall input is capped at uint32_t, so there is no need for
>>>    nr_frames to be unsigned long in the lower levels.
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
>>
>> I'd like to note though that ...
>>
>>> --- a/xen/arch/x86/mm/hap/hap.c
>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>> @@ -58,16 +58,16 @@
>>>
>>>  int hap_track_dirty_vram(struct domain *d,
>>>                           unsigned long begin_pfn,
>>> -                         unsigned long nr,
>>> +                         unsigned int nr_frames,
>>>                           XEN_GUEST_HANDLE(void) guest_dirty_bitmap)
>>>  {
>>>      long rc = 0;
>>>      struct sh_dirty_vram *dirty_vram;
>>>      uint8_t *dirty_bitmap = NULL;
>>>
>>> -    if ( nr )
>>> +    if ( nr_frames )
>>>      {
>>> -        int size = (nr + BITS_PER_BYTE - 1) / BITS_PER_BYTE;
>>> +        unsigned int size = DIV_ROUND_UP(nr_frames, BITS_PER_BYTE);
>> ... with the change from long to int this construct will now no
>> longer be correct for the (admittedly absurd) case of a hypercall
>> input in the range of [0xfffffff9,0xffffffff]. We now fully
>> depend on this getting properly rejected at the top level hypercall
>> handler (which limits to 1Gb worth of tracked space).
>
> I don't see how this makes any difference at all.
>
> Exactly the same would be true in the old code for an input in the range
> [0xfffffffffffffff9,0xffffffffffffffff], where the aspect which protects
> you is the fact that the hypercall ABI truncates to 32 bits.

Exactly: The hypercall ABI won't change. The GB(1) check up the call
tree may go away, without the then arising issue being noticed.

Jan
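
For reference, a minimal standalone sketch (not part of the patch or this
thread) of the wraparound Jan points out, assuming a 64-bit unsigned long
(as on x86-64) and BITS_PER_BYTE == 8:

    #include <stdio.h>

    #define BITS_PER_BYTE 8
    /* Same open-coded rounding idiom the patch replaces. */
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        unsigned long nr_long = 0xfffffff9UL; /* old type: sum stays in 64 bits */
        unsigned int  nr_int  = 0xfffffff9U;  /* new type: nr_int + 7 wraps to 0 */

        /* 64-bit arithmetic: (0xfffffff9 + 7) / 8 == 0x20000000 */
        printf("unsigned long: %#lx\n", DIV_ROUND_UP(nr_long, BITS_PER_BYTE));

        /* 32-bit arithmetic: the sum wraps to 0, so the result is 0 */
        printf("unsigned int:  %#x\n", DIV_ROUND_UP(nr_int, BITS_PER_BYTE));

        return 0;
    }

With the 32-bit type, an input of 0xfffffff9 frames would compute a
zero-sized dirty bitmap, which is why the GB(1) cap in the top-level
hypercall handler becomes the only thing rejecting such inputs.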