[Xen-devel] [v9][PATCH 02/16] xen/vtd: create RMRR mapping
RMRR reserved regions must be set up in the pfn space with an identity
mapping to the reported mfn. However, the existing code fails to set up
a correct mapping when VT-d shares the EPT page table, which leads to
problems when assigning devices (e.g. a GPU) that have RMRRs reported.
So instead, this patch sets up the identity mapping in the p2m layer,
regardless of whether EPT is shared or not, while still creating the
VT-d table entries. We also introduce a pair of helpers to create/clear
this sort of identity mapping, as follows:

set_identity_p2m_entry():
 If the gfn space is unoccupied, we just set the mapping. If the space
 is already occupied by the desired identity mapping, do nothing.
 Otherwise, failure is returned.

clear_identity_p2m_entry():
 We just define a macro wrapping guest_physmap_remove_page(), which now
 returns a value as necessary.

CC: Tim Deegan <tim@xxxxxxx>
CC: Keir Fraser <keir@xxxxxxx>
CC: Jan Beulich <jbeulich@xxxxxxxx>
CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CC: Yang Zhang <yang.z.zhang@xxxxxxxxx>
CC: Kevin Tian <kevin.tian@xxxxxxxxx>
Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
Reviewed-by: Tim Deegan <tim@xxxxxxx>
Acked-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Signed-off-by: Tiejun Chen <tiejun.chen@xxxxxxxxx>
---
v6 ~ v9:

* Nothing is changed.

v5:

* Fold our original patches #2 and #3 into this new one.
* Introduce a new helper, clear_identity_p2m_entry(), which wraps
  guest_physmap_remove_page(). We use this to clean up our identity
  mapping.

v4:

* Change the original condition to
    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
  to make sure we catch those invalid mfn mappings as expected.
* Have
    if ( !paging_mode_translate(p2m->domain) )
        return 0;
  at the start, instead of indenting the whole body of the function in
  an inner scope.
* Extend guest_physmap_remove_page() to return a value, so it can act
  as a proper unmapping helper.
* Use guest_physmap_remove_page() instead of intel_iommu_unmap_page()
  to unmap the RMRR mapping correctly.
* Drop iommu_map_page() since ept_set_entry() can do this internally.
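As an illustration only (not part of the patch), below is a minimal
standalone C model of the semantics described above: the three-way
check in set_identity_p2m_entry() (unoccupied, already the desired
identity mapping, or occupied by something else) and the simple
clear_identity_p2m_entry() wrapper. All names here (p2m_slot,
set_identity(), clear_identity(), NR_GFNS) are hypothetical stand-ins,
not the real Xen interfaces; the real code operates on the host p2m
under the gfn lock, as shown in the diff below.

/*
 * Standalone sketch (not Xen code) of the decision logic described
 * above: a toy "p2m" with one slot per gfn, where setting an identity
 * entry succeeds if the slot is empty or already holds the same
 * identity mapping, and fails with -EBUSY otherwise.
 */
#include <stdio.h>
#include <errno.h>

enum p2m_type { P2M_INVALID, P2M_MMIO_DM, P2M_MMIO_DIRECT };

struct p2m_slot {
    enum p2m_type type;
    unsigned long mfn;
};

#define NR_GFNS 16
static struct p2m_slot p2m[NR_GFNS];

/* Mirrors the three-way check in set_identity_p2m_entry(). */
static int set_identity(unsigned long gfn)
{
    struct p2m_slot *e = &p2m[gfn];

    if ( e->type == P2M_INVALID || e->type == P2M_MMIO_DM )
    {
        e->type = P2M_MMIO_DIRECT;  /* unoccupied: create gfn == mfn */
        e->mfn = gfn;
        return 0;
    }
    if ( e->type == P2M_MMIO_DIRECT && e->mfn == gfn )
        return 0;                   /* already the desired identity map */

    printf("Cannot setup identity map gfn %lx, already mapped to %lx\n",
           gfn, e->mfn);
    return -EBUSY;                  /* occupied by something else */
}

/* Counterpart of the clear_identity_p2m_entry() wrapper: undo it. */
static int clear_identity(unsigned long gfn)
{
    p2m[gfn].type = P2M_INVALID;
    return 0;
}

int main(void)
{
    /* Walk an RMRR-like range [2, 5) pfn by pfn, as the caller does. */
    for ( unsigned long pfn = 2; pfn < 5; pfn++ )
        if ( set_identity(pfn) )
            return 1;

    set_identity(3);                /* repeated call is a harmless no-op */

    p2m[7].type = P2M_MMIO_DIRECT;  /* simulate a conflicting mapping */
    p2m[7].mfn = 9;
    printf("set_identity(7) -> %d\n", set_identity(7));

    for ( unsigned long pfn = 2; pfn < 5; pfn++ )
        clear_identity(pfn);

    return 0;
}

The main() loop mirrors the way rmrr_identity_mapping() in the hunks
below walks an RMRR range one pfn at a time, mapping on device
assignment and unmapping on teardown.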
 xen/arch/x86/mm/p2m.c               | 40 +++++++++++++++++++++++++++++++++++--
 xen/drivers/passthrough/vtd/iommu.c |  5 ++---
 xen/include/asm-x86/p2m.h           | 13 +++++++++---
 3 files changed, 50 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6b39733..99a26ca 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -584,14 +584,16 @@ p2m_remove_page(struct p2m_domain *p2m, unsigned long gfn, unsigned long mfn,
                       p2m->default_access);
 }
 
-void
+int
 guest_physmap_remove_page(struct domain *d, unsigned long gfn,
                           unsigned long mfn, unsigned int page_order)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int rc;
     gfn_lock(p2m, gfn, page_order);
-    p2m_remove_page(p2m, gfn, mfn, page_order);
+    rc = p2m_remove_page(p2m, gfn, mfn, page_order);
     gfn_unlock(p2m, gfn, page_order);
+    return rc;
 }
 
 int
@@ -898,6 +900,40 @@ int set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
     return set_typed_p2m_entry(d, gfn, mfn, p2m_mmio_direct, access);
 }
 
+int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
+                           p2m_access_t p2ma)
+{
+    p2m_type_t p2mt;
+    p2m_access_t a;
+    mfn_t mfn;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int ret;
+
+    if ( !paging_mode_translate(p2m->domain) )
+        return 0;
+
+    gfn_lock(p2m, gfn, 0);
+
+    mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL);
+
+    if ( p2mt == p2m_invalid || p2mt == p2m_mmio_dm )
+        ret = p2m_set_entry(p2m, gfn, _mfn(gfn), PAGE_ORDER_4K,
+                            p2m_mmio_direct, p2ma);
+    else if ( mfn_x(mfn) == gfn && p2mt == p2m_mmio_direct && a == p2ma )
+        ret = 0;
+    else
+    {
+        ret = -EBUSY;
+        printk(XENLOG_G_WARNING
+               "Cannot setup identity map d%d:%lx,"
+               " gfn already mapped to %lx.\n",
+               d->domain_id, gfn, mfn_x(mfn));
+    }
+
+    gfn_unlock(p2m, gfn, 0);
+    return ret;
+}
+
 /* Returns: 0 for success, -errno for failure */
 int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
 {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 44ed23d..8415958 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1839,7 +1839,7 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
 
         while ( base_pfn < end_pfn )
         {
-            if ( intel_iommu_unmap_page(d, base_pfn) )
+            if ( clear_identity_p2m_entry(d, base_pfn, 0) )
                 ret = -ENXIO;
             base_pfn++;
         }
@@ -1855,8 +1855,7 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
 
     while ( base_pfn < end_pfn )
     {
-        int err = intel_iommu_map_page(d, base_pfn, base_pfn,
-                                       IOMMUF_readable|IOMMUF_writable);
+        int err = set_identity_p2m_entry(d, base_pfn, p2m_access_rw);
 
         if ( err )
             return err;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index b49c09b..190a286 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -503,9 +503,9 @@ static inline int guest_physmap_add_page(struct domain *d,
 }
 
 /* Remove a page from a domain's p2m table */
-void guest_physmap_remove_page(struct domain *d,
-                               unsigned long gfn,
-                               unsigned long mfn, unsigned int page_order);
+int guest_physmap_remove_page(struct domain *d,
+                              unsigned long gfn,
+                              unsigned long mfn, unsigned int page_order);
 
 /* Set a p2m range as populate-on-demand */
 int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
@@ -543,6 +543,13 @@ int set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
                        p2m_access_t access);
 int clear_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn);
 
+/* Set identity addresses in the p2m table (for pass-through) */
+int set_identity_p2m_entry(struct domain *d, unsigned long gfn,
+                           p2m_access_t p2ma);
+
+#define clear_identity_p2m_entry(d, gfn, page_order) \
+    guest_physmap_remove_page(d, gfn, gfn, page_order)
+
 /* Add foreign mapping to the guest's p2m table. */
 int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
                     unsigned long gpfn, domid_t foreign_domid);
-- 
1.9.1

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel