Re: [PATCH] x86/ept: limit calls to memory_type_changed()
On 22/09/2022 17:05, Roger Pau Monne wrote:
> memory_type_changed() is currently only implemented for Intel EPT, and
> results in the invalidation of EMT attributes on all the entries in
> the EPT page tables. Such invalidation causes EPT_MISCONFIG vmexits
> when the guest tries to access any gfns for the first time, which
> results in the recalculation of the EMT for the accessed page. The
> vmexit and the recalculations are expensive, and as such should be
> avoided when possible.
>
> Remove the call to memory_type_changed() from
> XEN_DOMCTL_memory_mapping: there are no modifications of the
> iomem_caps ranges anymore that could alter the return of
> cache_flush_permitted() from that domctl.
>
> Calls to memory_type_changed() resulting from changes to the domain
> iomem_caps or ioport_caps ranges are only relevant for EMT
> calculations if the IOMMU is not enabled, and the call has resulted in
> a change to the return value of cache_flush_permitted().

This, and the perf problem Citrix have found, is caused by a more
fundamental bug which I identified during XSA-402.

Xen is written with the assumption that cacheability other than WB
depends on having devices. While this is perhaps true of the
configurations available today, it is a layering violation, and will
cease to be true in order to support encrypted RAM (and, by extension,
encrypted VMs).

At the moment, we know the IOMMU-ness of a domain right from the
outset, but the cacheability permissions are dynamic, based on the
non-emptiness of the domain's device list, ioport list, and various
others. All the memory_type_changed() calls here exist to cover the
fact that the original design was buggy: cacheability should have been
part of domain create in the first place.

The appropriate fix, though definitely 4.18 work at this point, is a
new CDF flag which permits the use of non-WB cacheability.
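To illustrate the difference, here is a minimal sketch of the two approaches. All names, the struct layout, and the flag bit are illustrative stand-ins, not Xen's actual definitions:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical flag bit, standing in for the proposed CDF flag. */
#define CDF_full_cacheability (1u << 2)

/* Simplified model of a domain, not Xen's struct domain. */
struct domain {
    unsigned int config;        /* fixed at domain create */
    bool iomem_caps_nonempty;   /* dynamic: changes as ranges are granted */
    bool ioport_caps_nonempty;  /* dynamic */
    bool has_arch_pdevs;        /* dynamic: device assignment */
};

/*
 * Today's shape: the answer is derived from mutable per-domain lists,
 * so every hypercall that can change those lists may flip the result,
 * and has to call memory_type_changed() to invalidate cached EMT state.
 */
static bool cache_flush_permitted_dynamic(const struct domain *d)
{
    return d->iomem_caps_nonempty || d->ioport_caps_nonempty ||
           d->has_arch_pdevs;
}

/*
 * Proposed shape: a create-time flag makes the answer immutable for the
 * lifetime of the domain, so list changes never require invalidation.
 */
static bool cache_flush_permitted_static(const struct domain *d)
{
    return d->config & CDF_full_cacheability;
}
```

The point of the static form is that hypercall ordering no longer matters: the answer is the same before and after any device or ioport range is granted.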
For testing purposes alone, turning it on for an otherwise "plain VM"
is useful (it's how I actually debugged XSA-402, and the only sane way
to go about investigating the MTRR perf disasters for VGPU VMs[1]).
But for regular use cases it wants cross-checking with the IOMMU flag
(and an encrypted-VM flag in the future), and all the dynamic list
checks want turning into a simple 'd->config & CDF_full_cacheability'.

This way, we delete all the calls to memory_type_changed() which try
to cover the various dynamic lists becoming empty/non-empty, and we
remove several ordering-of-hypercall bugs where non-cacheable mappings
can't actually be created on a VM declared to have an IOMMU until a
device has actually been assigned to begin with.

~Andrew

[1] MTRR handling is also buggy with reduced cacheability, causing
some areas of RAM to be used UC; notably the grant table. This
manifests as PV device perf being worse than qemu-emulated device
perf, but only when a GPU is added to the VM[2]. Instead of fixing
this properly, it was hacked around by forcing IPAT=1 for Xenheap
pages, which only "fixed" the problem on Intel (AMD has no equivalent
mechanism), and needs reverting and fixing properly (i.e. getting the
vMTRR layout working correctly) to support VMs with encrypted RAM.

[2] There's a second bug with memory_type_changed() in that it causes
dreadful system performance during VM migration, which is something to
do with the interaction of restoring vMTRRs for a VM that has a device
but isn't running yet. This still needs investigating, but I suspect
it has a similar root cause.