Re: [Xen-devel] open-coded page list manipulation in shadow code
At 08:36 +0000 on 30 Jan (1422603387), Jan Beulich wrote:
> >>> On 29.01.15 at 18:30, <tim@xxxxxxx> wrote:
> > OK, here's what I have. It keeps the mechanism the same, threading on
> > the main page list entry, but tidies it up to use the page_list operations
> > rather than open-coding. I've tested it (lightly - basically just
> > boot testing) with 32bit, 32pae and 64bit Windows VMs (all SMP), with
> > bigmem=y and bigmem=n.
> >
> > It was developed on top of your bigmem series, but it should apply OK
> > before patch 3/4, removing the need to nobble shadow-paging in that
> > patch.
>
> Thanks, looks quite okay, just a couple of remarks.
>
> > @@ -1525,6 +1495,14 @@ mfn_t shadow_alloc(struct domain *d,
> > if ( shadow_type >= SH_type_min_shadow
> > && shadow_type <= SH_type_max_shadow )
> > sp->u.sh.head = 1;
> > +
> > +#ifndef PAGE_LIST_NULL
> > + /* The temporary list-head is on our stack. Blank out the
> > + * pointers to it in the shadows, just to get a clean failure if
> > + * we accidentally follow them. */
> > + tmp_list.next->prev = tmp_list.prev->next = NULL;
> > +#endif
>
> Perhaps better to use LIST_POISON{1,2} here, so we don't point into
> PV VA space?
Yep, will do.
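Something like this, I expect (untested sketch, assuming the LIST_POISON1/2
values from xen/include/xen/list.h):

    #ifndef PAGE_LIST_NULL
        /* The temporary list-head is on our stack.  Poison the pointers to
         * it in the shadows so an accidental dereference faults cleanly
         * instead of landing in PV VA space. */
        tmp_list.next->prev = LIST_POISON2;
        tmp_list.prev->next = LIST_POISON1;
    #endif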
> > --- a/xen/arch/x86/mm/shadow/private.h
> > +++ b/xen/arch/x86/mm/shadow/private.h
> > @@ -318,6 +318,33 @@ static inline int mfn_oos_may_write(mfn_t gmfn)
> > }
> > #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) */
> >
> > +/* Figure out the size (in pages) of a given shadow type */
> > +static inline u32
> > +shadow_size(unsigned int shadow_type)
> > +{
> > + static const u32 type_to_size[SH_type_unused] = {
> > + 1, /* SH_type_none */
> > + 2, /* SH_type_l1_32_shadow */
> > + 2, /* SH_type_fl1_32_shadow */
> > + 4, /* SH_type_l2_32_shadow */
> > + 1, /* SH_type_l1_pae_shadow */
> > + 1, /* SH_type_fl1_pae_shadow */
> > + 1, /* SH_type_l2_pae_shadow */
> > + 1, /* SH_type_l2h_pae_shadow */
> > + 1, /* SH_type_l1_64_shadow */
> > + 1, /* SH_type_fl1_64_shadow */
> > + 1, /* SH_type_l2_64_shadow */
> > + 1, /* SH_type_l2h_64_shadow */
> > + 1, /* SH_type_l3_64_shadow */
> > + 1, /* SH_type_l4_64_shadow */
> > + 1, /* SH_type_p2m_table */
> > + 1, /* SH_type_monitor_table */
> > + 1 /* SH_type_oos_snapshot */
> > + };
> > + ASSERT(shadow_type < SH_type_unused);
> > + return type_to_size[shadow_type];
> > +}
>
> Isn't this going to lead to multiple instances of that static array?
Urgh, maybe. I'd have thought this would end up as a common symbol
(if that's the term I mean - one where the linker will merge identical
copies), but I didn't check what actually happens. I'll move the array back
into a .c and just have the lookup function in the header.
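Roughly this shape (sketch only; names and file placement provisional):

    /* shadow/common.c */
    const u8 sh_type_to_size[SH_type_unused] = {
        1, /* SH_type_none */
        2, /* SH_type_l1_32_shadow */
        2, /* SH_type_fl1_32_shadow */
        4, /* SH_type_l2_32_shadow */
        /* ... remaining entries as in the hunk above, all 1 ... */
        1  /* SH_type_oos_snapshot */
    };

    /* shadow/private.h */
    extern const u8 sh_type_to_size[SH_type_unused];

    static inline unsigned int
    shadow_size(unsigned int shadow_type)
    {
        ASSERT(shadow_type < SH_type_unused);
        return sh_type_to_size[shadow_type];
    }

That way there's exactly one instance of the table and the lookup still
inlines at the call sites.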
> > +#ifndef PAGE_LIST_NULL
> > + /* The temporary list-head is on our stack. Blank out the
> > + * pointers to it in the shadows, just to get a clean failure if
> > + * we accidentally follow them. */
> > + tmp_list.next->prev = tmp_list.prev->next = NULL;
> > +#endif
>
> Same here. Perhaps worth putting into another inline helper?
Yep, will do so for v2.
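i.e. something along these lines (helper name hypothetical, untested):

    /* Poison the back-pointers into a temporary on-stack list head. */
    static inline void
    sh_terminate_list(struct page_list_head *tmp_list)
    {
    #ifndef PAGE_LIST_NULL
        /* The temporary list-head is on our stack.  Poison the pointers
         * to it in the shadows, to get a clean failure if we accidentally
         * follow them. */
        tmp_list->next->prev = LIST_POISON2;
        tmp_list->prev->next = LIST_POISON1;
    #endif
    }

with a single call at each of the two sites.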
Cheers,
Tim.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel