
Re: [Xen-devel] Xen-unstable-staging: Xen BUG at iommu_map.c:455



At 16:31 +0100 on 05 May (1430843498), Jan Beulich wrote:
> >>> On 05.05.15 at 17:17, <tim@xxxxxxx> wrote:
> > At 16:10 +0100 on 05 May (1430842206), Jan Beulich wrote:
> >> From what I
> >> can tell (and assuming other code works correctly) the fact that
> >> arch_iommu_populate_page_table() sets d->need_iommu to -1
> >> first thing should make sure that any subsequent changes to the
> >> p2m get propagated to IOMMU code for setting up respective
> >> mappings.
> > 
> > Yes, but might they then be overridden by the previous mapping when
> > this new code calls map_page()?
> 
> Ah, I see now.
> 
> > This seems like a case where we should be using get_gfn()/put_gfn().
> 
> Yes - provided these may be called at all with the page_alloc_lock
> held. IOW - is there lock ordering defined between this one and
> the various mm locks?

Good point.  The page_alloc lock nests inside the p2m lock, for PoD
(see page_alloc_mm_pre_lock() in mm-locks.h).  So we can't call p2m
operations here.
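
To spell the nesting out (lock/function names from memory; a rough
illustration rather than compile-tested code):

/*
 * Established order, per page_alloc_mm_pre_lock() in mm-locks.h:
 * p2m lock on the outside, page_alloc lock on the inside, as in
 * the PoD sweep:
 */
p2m_lock(p2m);
spin_lock(&d->page_alloc_lock);
/* ... reclaim/zero pages ... */
spin_unlock(&d->page_alloc_lock);
p2m_unlock(p2m);

/*
 * whereas arch_iommu_populate_page_table() already holds
 * d->page_alloc_lock while walking d->page_list, so calling
 * get_gfn() (which takes the p2m lock) from inside that walk
 * would nest the two locks the other way round.
 */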

> Also, if doing so, would I then need to check the result of the
> inverse (p2m) translation after having done get_gfn() to make
> sure this is still the MFN I'm after? If so, and if it ends up being
> a different one, I'd have to retry and presumably somehow limit
> the number of retries...

Yes.  Ideally this loop would be iterating over all gfns in the p2m
rather than over all owned MFNs.  As long as need_iommu gets set
first, such a loop could safely be paused and restarted without
worrying about concurrent updates.  The code could even stay in this
file, though exposing an iterator from the p2m code would be a lot
more efficient.
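
Roughly what I have in mind (an untested sketch; helper names and
signatures from memory, so treat it as pseudo-code rather than a
patch):

/*
 * Untested sketch: iterate by gfn so that the p2m lock, rather than
 * the page_alloc lock, serialises this against concurrent p2m
 * updates.  Because d->need_iommu is already set, any gfn not yet
 * reached will be propagated by the normal p2m update path, so the
 * loop can be preempted and restarted at "gfn" without losing
 * anything.
 */
static int populate_iommu_by_gfn(struct domain *d, unsigned long max_gfn)
{
    unsigned long gfn;
    int rc = 0;

    for ( gfn = 0; gfn <= max_gfn && !rc; gfn++ )
    {
        p2m_type_t t;
        mfn_t mfn = get_gfn(d, gfn, &t);      /* takes the p2m lock */

        if ( p2m_is_ram(t) && mfn_valid(mfn_x(mfn)) )
            rc = iommu_map_page(d, gfn, mfn_x(mfn),
                                IOMMUF_readable | IOMMUF_writable);

        put_gfn(d, gfn);

        if ( !(gfn & 0xfff) )
            process_pending_softirqs();       /* crude preemption point */
    }

    return rc;
}

An iterator exposed by the p2m code could then replace the linear gfn
scan without changing the locking story.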

In the meantime the patch you linked to is an improvement, so it can
have my ack.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

