Re: [for-4.15][PATCH v2 1/5] xen/x86: p2m: Don't map the special pages in the IOMMU page-tables
Hi Jan,

On 10/02/2021 11:26, Jan Beulich wrote:
> On 09.02.2021 16:28, Julien Grall wrote:
>> From: Julien Grall <jgrall@xxxxxxxxxx>
>>
>> Currently, the IOMMU page-tables will be populated early in the domain
>> creation if the hardware is able to virtualize the local APIC. However,
>> the IOMMU page-tables will not be freed during early failure and will
>> result in a leak.
>>
>> An assigned device should not need to DMA into the vLAPIC page, so we
>> can avoid mapping the page in the IOMMU page-tables.
>
> Here and below, may I ask that you use the correct term "APIC access
> page", as there are other pages involved in vLAPIC handling (in
> particular the virtual APIC page, which is where accesses that
> translate to the APIC access page in EPT actually go).
>
>> This statement is also true for any special page (the vLAPIC page is
>> one of them). So take the opportunity to prevent the mapping for all
>> of them.
>
> I probably should have realized this earlier, but there is a downside
> to this: A guest wanting to core dump itself may want to dump e.g.
> shared info and vcpu info pages. Hence ...
>
>> --- a/xen/include/asm-x86/p2m.h
>> +++ b/xen/include/asm-x86/p2m.h
>> @@ -919,6 +919,10 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
>>  {
>>      unsigned int flags;
>>
>> +    /* Don't map special pages in the IOMMU page-tables. */
>> +    if ( mfn_valid(mfn) && is_special_page(mfn_to_page(mfn)) )
>> +        return 0;
>
> ... instead of is_special_page() I think you want to limit the check
> here to seeing whether PGC_extra is set. But as said on IRC, since this
> crude way of setting up the APIC access page is now firmly a problem, I
> intend to try to redo it.

Given this series needs to go in 4.15 (we would introduce a 0-day
otherwise), could you clarify whether your patch [1] is intended to
replace this one in 4.15?

Cheers,

[1] <1b6a411b-84e7-bfb1-647e-511a13df838c@xxxxxxxx>

--
Julien Grall
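For illustration only, a minimal sketch of the alternative Jan suggests in the
review above: keying the IOMMU-mapping exclusion on PGC_extra (which covers
e.g. the APIC access page) rather than on is_special_page(), so that pages a
guest may legitimately want to dump, such as the shared info and vcpu info
pages, would keep their IOMMU mappings. This is an assumption of what such a
change could look like as a drop-in replacement for the two added lines in the
p2m_get_iommu_flags() hunk quoted above, reusing the existing mfn_valid(),
mfn_to_page(), count_info and PGC_extra definitions from the Xen tree; it is
not a committed or posted patch.

    /*
     * Sketch: only skip IOMMU mappings for "extra" pages (e.g. the APIC
     * access page), instead of for every page is_special_page() covers.
     */
    if ( mfn_valid(mfn) && (mfn_to_page(mfn)->count_info & PGC_extra) )
        return 0;

The narrower check addresses Jan's core-dump concern while still keeping
device DMA away from the internally allocated "extra" pages.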