Re: [Xen-devel] [PATCH v6 3/7] xen/x86: populate PVHv2 Dom0 physical memory map
On Mon, Feb 13, 2017 at 06:53:49AM -0700, Jan Beulich wrote:
> >>> On 10.02.17 at 13:33, <roger.pau@xxxxxxxxxx> wrote:
> > ---
> > Changes since v5:
> > - Adjust the logic to set need_paging.
> > - Remove the usage of the _AC macro.
> > - Subtract memory from the end of regions instead of the start.
> > - Create the VM86_TSS before the identity page table, so that the page
> >   table is aligned to a page boundary.
> > - Use MB1_PAGES in modify_identity_mmio.
> > - Move and simplify the ASSERT in pvh_setup_p2m.
> > - Move the creation of the PSE page tables to a separate function, and
> >   use it in shadow_enable also.
> > - Make the modify_identity_mmio map parameter a constant.
> > - Add a comment to HVM_VM86_TSS_SIZE, although it seems this might need
> >   further fixing.
>
> Indeed, I think this wants to be re-based on top of the patch I've
> just sent (with you Cc-ed).
Sure. Just done that.
> > +static int __init pvh_steal_ram(struct domain *d, unsigned long size,
> > +                                unsigned long align, paddr_t limit,
> > +                                paddr_t *addr)
> > +{
> > +    unsigned int i = d->arch.nr_e820;
> > +
> > +    /*
> > +     * Alignment 0 should be set to 1, so it doesn't wrap around in the
> > +     * calculations below.
> > +     */
> > +    align = align ? : 1;
> > +    while ( i-- )
> > +    {
> > +        struct e820entry *entry = &d->arch.e820[i];
> > +
> > +        if ( entry->type != E820_RAM || entry->addr + entry->size > limit ||
> > +             entry->addr < MB(1) )
> > +            continue;
> > +
> > +        *addr = (entry->addr + entry->size - size) & ~(align - 1);
>
> Without taking the present callers into account (who don't request
> huge alignment) this (theoretically) risks running into the low 1Mb.
> I see two options - either add a comment clarifying that an entry
> will never cross the 1Mb boundary (and hence the issue can't
> occur), or limit the alignment (by way of ASSERT()) to PAGE_SIZE
> (in which case no significant harm would result from crossing the
> boundary).
I don't mind adding the ASSERT, but I don't see how this can risk running into
the low 1MB: if entry->addr < MB(1) the entry is skipped, and an entry that
crosses the 1MB boundary will certainly have entry->addr < MB(1).
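Something along these lines at the top of pvh_steal_ram then, I guess
(untested sketch, going with your second option):

    /*
     * Limit the supported alignment to PAGE_SIZE, so that rounding the
     * address down can never stray significantly below the start of a
     * (>= 1MB) RAM entry.
     */
    ASSERT(align <= PAGE_SIZE);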
> > +static int __init pvh_setup_vmx_realmode_helpers(struct domain *d)
> > +{
> > +    p2m_type_t p2mt;
> > +    uint32_t rc, *ident_pt;
> > +    uint8_t *tss;
> > +    mfn_t mfn;
> > +    paddr_t gaddr;
> > +
> > +    /*
> > +     * Steal some space from the last RAM region below 4GB and use it to
> > +     * store the real-mode TSS.
> > +     */
> > +    if ( !pvh_steal_ram(d, HVM_VM86_TSS_SIZE, 0, GB(4), &gaddr) &&
>
> Please request at least 128-byte alignment here, to avoid the
> base TSS structure crossing a page boundary.
Right, this TSS is loaded while using the identity PT, so with paging enabled.
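So I'll request 128-byte alignment for the allocation, roughly (untested
sketch, the error handling here is just illustrative and treats a missing
TSS as non-fatal):

    if ( pvh_steal_ram(d, HVM_VM86_TSS_SIZE, 128, GB(4), &gaddr) )
    {
        printk("Unable to allocate memory for the VM86 TSS\n");
        return 0;
    }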
> > +         (tss = map_domain_gfn(p2m_get_hostp2m(d), _gfn(PFN_DOWN(gaddr)),
> > +                               &mfn, &p2mt, 0, &rc)) )
> > +    {
> > +        memset(tss, 0, HVM_VM86_TSS_SIZE);
>
> What makes you assume that you've mapped all the space you've
> allocated?
Hm, right, I've just realized we don't really need to map anything here: an
hvm_copy_to_guest_phys with a NULL buffer should be enough, and then I don't
need to worry about page boundaries.
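Ie: something like this instead of the map_domain_gfn/memset pair (untested
sketch, relying on __hvm_copy zeroing the destination when the buffer is
NULL):

    /* Zero the TSS area through the p2m, no mapping needed. */
    if ( hvm_copy_to_guest_phys(gaddr, NULL, HVM_VM86_TSS_SIZE) !=
         HVMCOPY_okay )
        printk("Unable to zero VM86 TSS area\n");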
> > --- a/xen/include/asm-x86/page.h
> > +++ b/xen/include/asm-x86/page.h
> > @@ -374,6 +374,18 @@ perms_strictly_increased(uint32_t old_flags, uint32_t new_flags)
> >      return ((of | (of ^ nf)) == nf);
> >  }
> >
> > +/* Build a 32bit PSE page table using 4MB pages. */
> > +static inline void
> > +write_32bit_pse_identmap(uint32_t *l2)
>
> Why does this need to be an inline function?
Given its size and the low number of callers I thought it would be better to
make it inline, but since this is not in any performance-critical path I'm
going to remove the inlining, although the compiler will probably do it
anyway.
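Ie: turn it into a plain function in a .c file (likely mm.c), keeping the
same body, roughly:

    /* Build a 32bit PSE page table using 4MB pages. */
    void write_32bit_pse_identmap(uint32_t *l2)
    {
        unsigned int i;

        for ( i = 0; i < PAGE_SIZE / sizeof(*l2); i++ )
            l2[i] = ((i << 22) | _PAGE_PRESENT | _PAGE_RW | _PAGE_USER |
                     _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_PSE);
    }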
Roger.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel