
Re: [Xen-devel] [PATCH 1/3] x86/p2m: tighten conditions of IOMMU mapping updates




> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-
> bounces@xxxxxxxxxxxxx] On Behalf Of Jan Beulich
> Sent: Tuesday, September 29, 2015 12:59 AM
> To: xen-devel
> Cc: George Dunlap; Andrew Cooper; Tian, Kevin; Keir Fraser; Nakajima, Jun
> Subject: Re: [Xen-devel] [PATCH 1/3] x86/p2m: tighten conditions of IOMMU
> mapping updates
> 
> >>> On 21.09.15 at 16:02, <JBeulich@xxxxxxxx> wrote:
> > In the EPT case permission changes should also result in updates or
> > TLB flushes.
> >
> > In the NPT case the old MFN does not depend on the new entry being
> > valid (but solely on the old one), and the need to update or TLB-flush
> > again also depends on permission changes.
> >
> > In the shadow mode case, iommu_hap_pt_share should be ignored.
> >
> > Furthermore in the NPT/shadow case old intermediate page tables must
> > be freed only after IOMMU side updates/flushes have got carried out.
> >
> > Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> > ---
> > In addition to the fixes here it looks to me as if both EPT and
> > NPT/shadow code lack invalidation of IOMMU side paging structure
> > caches, i.e. further changes may be needed. Am I overlooking something?
> 
> Okay, I think I finally figured it myself (with the help of the partial
> patch presented yesterday): There is no need for more flushing
> in the current model, where we only ever split large pages or fully
> replace sub-hierarchies with large pages (accompanied by a suitable
> flush already). More flushing would only be needed if we were to
> add code to re-establish large pages if an intermediate page table
> re-gained a state where this would be possible (quite desirable
> in a situation where log-dirty mode got temporarily enabled on a
> domain, but I'm not sure how common an operation that would be).
> When splitting large pages, all the hardware can have cached are
> - leaf entries the size of the page being split

I'm not sure whether there is an off-by-one issue here.

Basically I think we depend on the "Invalidation Hint" being cleared so 
that all the paging-structure caches are cleared. That should work as 
long as no split happens.
However, in the split case I suspect the entry that was split is not 
covered by the ADDR/AM combination, since, judging from 
iommu_flush_iotlb_psi(), the AM is in fact the "order" parameter passed 
to ept_set_entry(). I'm not sure whether I missed anything; see the 
sketch after the quoted spec text below.

* Address (ADDR): For page-selective-within-domain invalidations, the
  Address field indicates the starting second-level page address of the
  mappings that needs to be invalidated. Hardware ignores bits 127:(64+N),
  where N is the maximum guest address width (MGAW) supported. This field
  is ignored by hardware for global and domain-selective invalidations.
* Address Mask (AM): For page-selective-within-domain invalidations, the
  Address Mask specifies the number of contiguous second-level pages that
  needs to be invalidated. The encoding for the AM field is same as the AM
  field encoding in the Invalidate Address Register (see Section 10.4.8.2).
  When invalidating a large-page translation, software must use the
  appropriate Address Mask value (0 for 4KByte page, 9 for 2-MByte page,
  and 18 for 1-GByte page).
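
To make the concern concrete, here is a minimal sketch of how I read the 
ADDR/AM coverage rule quoted above. psi_flush_covers() is just a made-up 
illustration helper, not the Xen code or the VT-d driver:

#include <stdbool.h>
#include <stdint.h>

/*
 * Illustration only: per the quoted spec text, a page-selective-within-
 * domain invalidation with address mask 'am' covers the 2^am contiguous
 * 4-KByte pages starting at the AM-aligned flush address.  The 'gfn'
 * values here are 4-KByte page frame numbers.
 */
static bool psi_flush_covers(uint64_t flush_gfn, unsigned int am,
                             uint64_t target_gfn)
{
    uint64_t mask = ~((1ULL << am) - 1);

    return (target_gfn & mask) == (flush_gfn & mask);
}

Under that reading, if a 2-MByte translation gets split and the flush is 
then issued with am equal to the new 4-KByte order, psi_flush_covers(gfn, 
0, gfn + 1) is false, i.e. the rest of the region mapped by the old 
large-page translation would not appear to be covered by the flush. That 
is exactly the case I would like to see confirmed or ruled out.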

Thanks
--jyh

> - leaf entries of smaller size sub-pages (in case large pages don't
>   get entered into the TLB)
> - intermediate entries from above the leaf entry being split (which
>   don't get changed)
> Leaf entries already get flushed correctly, and since the involved
> intermediate entries don't get changed, no flush is needed there.
> 
> Jan
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

