[Xen-devel] [PATCH for-4.12 v2 3/7] x86/pvh: reorder PVH dom0 iommu initialization
So that the iommu is initialized before populating the p2m, and entries
added get the corresponding iommu page table entries if required. This
requires splitting the current pvh_setup_p2m into two different
functions: one that crafts the dom0 physmap and sets the paging
allocation, and another that actually populates the p2m with RAM
regions.

Note that this allows removing the special casing done for the low 1MB
in hwdom_iommu_map.

Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
---
Changes since v1:
 - New in this version.
---
The previous flow, where the iommu was enabled after having populated
the p2m, was used to work around an issue on an old pre-Haswell Intel
box. I no longer have that box, and when I tested PVH dom0 on it, that
was before the iommu map-reserved fixes. If hardware needing the iommu
page tables to contain RAM regions is found, I'm happy to work on other
solutions, like performing the setup of the iommu at the start of
dom0_construct_pvh but only enabling it once the dom0 p2m is fully
populated.
---
 xen/arch/x86/hvm/dom0_build.c       | 35 +++++++++++++++++++----------
 xen/drivers/passthrough/x86/iommu.c |  7 +-----
 2 files changed, 24 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index a571d15c13..aa599f09ef 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -409,14 +409,10 @@ static __init void pvh_setup_e820(struct domain *d, unsigned long nr_pages)
     ASSERT(cur_pages == nr_pages);
 }
 
-static int __init pvh_setup_p2m(struct domain *d)
+static void __init pvh_init_p2m(struct domain *d)
 {
-    struct vcpu *v = d->vcpu[0];
     unsigned long nr_pages = dom0_compute_nr_pages(d, NULL, 0);
-    unsigned int i;
-    int rc;
     bool preempted;
-#define MB1_PAGES PFN_DOWN(MB(1))
 
     pvh_setup_e820(d, nr_pages);
     do {
@@ -425,6 +421,14 @@ static int __init pvh_setup_p2m(struct domain *d)
                               &preempted);
         process_pending_softirqs();
     } while ( preempted );
+}
+
+static int __init pvh_populate_p2m(struct domain *d)
+{
+    struct vcpu *v = d->vcpu[0];
+    unsigned int i;
+    int rc;
+#define MB1_PAGES PFN_DOWN(MB(1))
 
     /*
      * Memory below 1MB is identity mapped initially. RAM regions are
@@ -1134,13 +1138,6 @@ int __init dom0_construct_pvh(struct domain *d, const module_t *image,
     printk(XENLOG_INFO "*** Building a PVH Dom%d ***\n", d->domain_id);
 
-    rc = pvh_setup_p2m(d);
-    if ( rc )
-    {
-        printk("Failed to setup Dom0 physical memory map\n");
-        return rc;
-    }
-
     /*
      * NB: MMCFG initialization needs to be performed before iommu
      * initialization so the iommu code can fetch the MMCFG regions used by the
@@ -1148,8 +1145,22 @@ int __init dom0_construct_pvh(struct domain *d, const module_t *image,
      */
     pvh_setup_mmcfg(d);
 
+    /*
+     * Craft dom0 physical memory map and set the paging allocation. This must
+     * be done before the iommu initialization, since iommu initialization code
+     * will likely add mappings required by devices to the p2m (ie: RMRRs).
+     */
+    pvh_init_p2m(d);
+    iommu_hwdom_init(d);
+
+    rc = pvh_populate_p2m(d);
+    if ( rc )
+    {
+        printk("Failed to setup Dom0 physical memory map\n");
+        return rc;
+    }
+
     rc = pvh_load_kernel(d, image, image_headroom, initrd,
                          bootstrap_map(image), cmdline,
                          &entry, &start_info);
     if ( rc )
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index a88ef9b189..42b1a1bbc3 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -151,12 +151,7 @@ static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
      * inclusive mapping additionally maps in every pfn up to 4GB except those
      * that fall in unusable ranges for PV Dom0.
      */
-    if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) ||
-         /*
-          * Ignore any address below 1MB, that's already identity mapped by the
-          * Dom0 builder for HVM.
-          */
-         (!d->domain_id && is_hvm_domain(d) && pfn < PFN_DOWN(MB(1))) )
+    if ( (pfn > max_pfn && !mfn_valid(mfn)) || xen_in_range(pfn) )
         return false;
 
     switch ( type = page_get_ram_type(mfn) )
-- 
2.20.1

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel