
[Xen-devel] GPU passthrough performance regression in >4GB vms due to XSA-60 changes


We've recently updated from Xen 4.3.1 to 4.3.2 and discovered a major regression in GPU passthrough performance in VMs using >4GB of memory. With GPU passthrough (some Radeon cards, and also integrated Intel GPU passthrough), CPU load is constantly near maximum and the screen is slow to update. The machines are Intel Haswell/Ivy Bridge laptops and desktops; the guests are Windows 7 64-bit HVMs.

I've bisected the regression down to the XSA-60 changes, specifically:

commit e81d0ac25464825b3828cff5dc9e8285612992c4
Author: Liu Jinsong <jinsong.liu@xxxxxxxxx>
Date:   Mon Dec 9 14:26:03 2013 +0100

    VMX: remove the problematic set_uc_mode logic

This commit removed a piece of logic which, when the guest briefly set the cache-disable (CD) bit in CR0, iterated over all mapped pfns and reset the memory type in the EPTs to be consistent with the result of a call to mtrr.c:epte_get_entry_emt(). My tracing indicates this used to return the WRITEBACK memory type for the 64-bit memory areas where the GPU's BARs appear to be located.

That code no longer runs, and my working theory is that the PCI BAR area is left uncached, which causes the general slowness. Note that I'm not talking about slow performance during the window in which CR0 has caching disabled: it stays slow even after the guest re-enables caching shortly afterwards, since the problem appears to be a side effect of the removed loop having set sane default EPT memory types on all pfns. Reintroducing the removed loop fixes the problem.

I'd welcome comments or ideas on how to debug this further, or perhaps there's an obvious fix.
