Re: [Xen-devel] [PATCH 22/34] x86/mm: put HVM only code under CONFIG_HVM
On Tue, Aug 21, 2018 at 02:07:22AM -0600, Jan Beulich wrote:
> >>> On 17.08.18 at 17:12, <wei.liu2@xxxxxxxxxx> wrote:
> > --- a/xen/arch/x86/mm/Makefile
> > +++ b/xen/arch/x86/mm/Makefile
> > @@ -2,8 +2,9 @@ subdir-y += shadow
> > subdir-y += hap
> >
> > obj-y += paging.o
> > -obj-y += p2m.o p2m-pt.o p2m-ept.o p2m-pod.o
> > -obj-y += altp2m.o
> > +obj-y += p2m.o p2m-pt.o p2m-pod.o
> > +obj-$(CONFIG_HVM) += p2m-ept.o
> > +obj-$(CONFIG_HVM) += altp2m.o
>
> PoD is HVM-only too.
>
> > --- a/xen/arch/x86/mm/hap/Makefile
> > +++ b/xen/arch/x86/mm/hap/Makefile
> > @@ -3,7 +3,7 @@ obj-y += guest_walk_2level.o
> > obj-y += guest_walk_3level.o
> > obj-y += guest_walk_4level.o
> > obj-y += nested_hap.o
> > -obj-y += nested_ept.o
> > +obj-$(CONFIG_HVM) += nested_ept.o
>
> As follows from the reply to the previous patch, the entire hap/ should
> be excluded when !HVM.
OK. I will put PoD and hap/ under CONFIG_HVM as well.
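(As a sketch of what that follow-up might look like in Kbuild terms — the exact lines are my guess, not the eventual patch — the hap/ subdirectory and the PoD object would move behind the config symbol:)

```make
# Hypothetical sketch: build hap/ and PoD only when CONFIG_HVM=y.
# xen/arch/x86/mm/Makefile
subdir-$(CONFIG_HVM) += hap
obj-$(CONFIG_HVM) += p2m-pod.o
```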
>
> > --- a/xen/arch/x86/mm/p2m.c
> > +++ b/xen/arch/x86/mm/p2m.c
> > @@ -531,7 +531,7 @@ struct page_info *p2m_get_page_from_gfn(
> > int p2m_set_entry(struct p2m_domain *p2m, gfn_t gfn, mfn_t mfn,
> > unsigned int page_order, p2m_type_t p2mt, p2m_access_t p2ma)
> > {
> > - struct domain *d = p2m->domain;
> > + struct domain * __maybe_unused d = p2m->domain;
>
> This presumably again is a result of using a macro somewhere which
> doesn't evaluate its argument(s)?
No. In this case it is because the code that uses d is guarded by
CONFIG_HVM, so a !HVM build would otherwise warn about an unused variable.
>
> > @@ -2165,6 +2159,14 @@ int unmap_mmio_regions(struct domain *d,
> > return i == nr ? 0 : i ?: ret;
> > }
> >
> > +#if CONFIG_HVM
>
> #ifdef (also elsewhere; did I overlook similar issues in earlier patches?)
>
OK.
Wei.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel