
Re: [Xen-devel] [PATCH 1/3] xen/vt-d: need barriers to workaround CLFLUSH



On 2015/5/5 23:46, Boris Ostrovsky wrote:
On 05/04/2015 05:14 AM, Andrew Cooper wrote:
On 04/05/2015 09:52, Jan Beulich wrote:
On 04.05.15 at 04:16, <tiejun.chen@xxxxxxxxx> wrote:
--- a/xen/drivers/passthrough/vtd/x86/vtd.c
+++ b/xen/drivers/passthrough/vtd/x86/vtd.c
@@ -56,7 +56,9 @@ unsigned int get_cache_line_size(void)
  void cacheline_flush(char * addr)
  {
+    mb();
      clflush(addr);
+    mb();
  }
I think the purpose of the flush is to force write back, not to evict
the cache line, and if so wmb() would appear to be sufficient. As
the SDM says that's not the case, a comment explaining why wmb()
is not sufficient would seem necessary. Plus in the description I
think "serializing" needs to be changed to "fencing", as serialization
is not what we really care about here. If you and the maintainers
agree, I could certainly fix up both aspects while committing.
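
A minimal sketch of what the commented helper could look like, using
mb() and clflush() as in the patch above (the SDM wording is
paraphrased, not quoted):

    void cacheline_flush(char *addr)
    {
        /*
         * The SDM guarantees that CLFLUSH is ordered only by MFENCE
         * (and locked operations), not by SFENCE or LFENCE, so wmb()
         * is not sufficient here: both fences must be full barriers.
         */
        mb();           /* order earlier stores before the flush */
        clflush(addr);
        mb();           /* order the flush before later accesses */
    }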
On the subject of writebacks, we should get around to patching in
clflushopt and clwb via the alternatives framework; either would be
better than clflush in this case (avoiding the need for the leading
mfence).

However, the ISA extension document does not indicate which processors
will have support for these new instructions.

https://software.intel.com/sites/default/files/managed/0d/53/319433-022.pdf
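
For reference, both instructions are enumerated in CPUID leaf 7:
CLFLUSHOPT is CPUID.(EAX=07H,ECX=0):EBX[bit 23] and CLWB is EBX[bit 24].
A minimal detection sketch using GCC's <cpuid.h> (not Xen's cpufeature
machinery):

    #include <cpuid.h>
    #include <stdbool.h>

    /* CPUID.(EAX=7,ECX=0):EBX bit 23 = CLFLUSHOPT, bit 24 = CLWB. */
    static bool cpu_has_clflushopt(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if ( !__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) )
            return false;
        return ebx & (1u << 23);
    }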

We really should add support for this. On shutting down a very large
guest (hundreds of GB) we observed *minutes* spent flushing the IOMMU.
This was due to the serializing nature of CLFLUSH.


The CLFLUSHOPT/CLWB instructions are ordered only by store-fencing operations, so at a minimum we would still need a trailing SFENCE.
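
That weaker ordering is exactly what makes a range flush cheaper: the
individual CLFLUSHOPTs may overlap, and one trailing SFENCE orders the
whole batch. A rough sketch (GCC inline asm; CLFLUSHOPT is encoded as a
0x66-prefixed CLFLUSH for assemblers lacking the mnemonic; the 64-byte
line size is assumed rather than read from CPUID):

    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_LINE_SIZE 64 /* assumed; really CPUID leaf 1, EBX[15:8] * 8 */

    static void flush_range_clflushopt(const volatile void *addr, size_t size)
    {
        uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHE_LINE_SIZE - 1);
        uintptr_t end = (uintptr_t)addr + size;

        for ( ; p < end; p += CACHE_LINE_SIZE )
            /* 0x66 prefix turns CLFLUSH (0F AE /7) into CLFLUSHOPT. */
            asm volatile ( ".byte 0x66; clflush %0"
                           : "+m" (*(volatile char *)p) );

        /* One store fence for the whole batch, instead of the implicit
         * per-line serialization of plain CLFLUSH. */
        asm volatile ( "sfence" ::: "memory" );
    }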

Thanks
Tiejun
