Re: [Xen-devel] [V1 PATCH 06/11] PVH dom0: construct_dom0 changes
>>> On 15.11.13 at 03:34, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> On Tue, 12 Nov 2013 16:35:05 +0000
> "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>
>> >>> On 09.11.13 at 02:23, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
>> >>> wrote:
> ......
>> These look more complicated than necessary (largely due to the
>> casts), but be it that way.
>>
>> > + pl4e = l4start + l4_table_offset(v_start);
>> >...
>> > + *pl4e = l4e_from_pfn(get_gpfn_from_mfn(l4e_get_pfn(*pl4e)),
>> > +                      l4e_get_flags(*pl4e));
>>
>> But this one I told before needs to be in a loop. You must not make
>> assumptions on guest virtual address space layout, and hence you
>> must not assume none of the initial mapping crosses an L4 boundary.
>
> Ah right, my bad.
>
>> And once these are in a loop, getting the earlier two loops simplified
>> (using l4e_empty() rather than plain memset()) and ordered properly
>> (the head part before the main loop, the tail part after) will be
>> almost obvious legibility cleanups.
>
> Actually, the loop will go from v_start->L4 to v_end->L4. The memsets
> are clearing the entries before v_start and after v_end. So, I'm
> thinking something like:
>
>     clear entries before v_start using memset
>     clear entries after v_end using memset
>
>     for ( pl4e = l4start + l4_table_offset(v_start);
>           pl4e <= l4start + l4_table_offset(v_end);
>           pl4e++ )
>     {
>         pl3e = map_l3t_from_l4e(*pl4e);
>         for ( i = 0; i < PAGE_SIZE / sizeof(*pl3e); i++, pl3e++ )
>         {
>             ...
>         }
>
> Look ok?

Yes, with the leading part getting cleared prior to the loop, and the
trailing part after (thus allowing to not re-initialize pl4e in each
loop's for()). And the ending condition is of course off by one above:
you really want pl4e <= l4start + l4_table_offset(v_end - 1).

Jan
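To make the off-by-one point above concrete, here is a small standalone C
sketch. It is not Xen code: the shift and offset macros below are simplified
stand-ins assumed here for illustration, and the example addresses are
hypothetical. It shows why l4_table_offset(v_end - 1) is the correct inclusive
upper bound, whereas l4_table_offset(v_end) walks one L4 slot too far whenever
v_end lands exactly on an L4 boundary.

    /*
     * Standalone sketch, not Xen code: a simplified model of the loop-bound
     * arithmetic discussed above.  The macros mirror Xen's x86-64 layout only
     * loosely and are assumptions made here for illustration.
     */
    #include <stdio.h>
    #include <stdint.h>

    #define L4_PAGETABLE_SHIFT   39   /* each L4 slot covers 2^39 bytes (512 GiB) */
    #define L4_PAGETABLE_ENTRIES 512
    #define l4_table_offset(va)  (((va) >> L4_PAGETABLE_SHIFT) & (L4_PAGETABLE_ENTRIES - 1))

    int main(void)
    {
        /* Hypothetical mapping that ends exactly on an L4 boundary. */
        uint64_t v_start = 0xffff800000000000ULL;
        uint64_t v_end   = v_start + (1ULL << L4_PAGETABLE_SHIFT);

        unsigned int first      = l4_table_offset(v_start);
        unsigned int last_right = l4_table_offset(v_end - 1); /* inclusive upper bound */
        unsigned int last_wrong = l4_table_offset(v_end);     /* off by one: one slot too far */

        printf("L4 slots used: %u..%u (v_end without -1 gives %u)\n",
               first, last_right, last_wrong);

        /* Shape of the corrected loop, iterating over slot indices: */
        for ( unsigned int slot = first; slot <= last_right; slot++ )
        {
            /* per-slot work: map the L3 table hanging off this L4 entry,
             * convert its entries, and so on. */
        }
        return 0;
    }

With these example addresses the mapping lives entirely in L4 slot 256, so a
loop bounded by l4_table_offset(v_end) would also touch slot 257, which is
exactly the off-by-one Jan points out.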