Re: [Xen-devel] [PATCHv2 2/3] mm: don't free pages until mm locks are released
On 02/12/15 16:46, Tim Deegan wrote:
> At 16:30 +0000 on 02 Dec (1449073841), George Dunlap wrote:
>> On 02/12/15 16:23, Tim Deegan wrote:
>>> At 07:25 +0000 on 02 Dec (1449041100), Tian, Kevin wrote:
>>>>> From: David Vrabel [mailto:david.vrabel@xxxxxxxxxx]
>>>>> Sent: Saturday, November 14, 2015 2:50 AM
>>>>>
>>>>> If a page is freed without translations being invalidated, and the
>>>>> page is subsequently allocated to another domain, a guest with a
>>>>> cached translation will still be able to access the page.
>>>>>
>>>>> Currently translations are invalidated before releasing the page
>>>>> ref, but while still holding the mm locks. To allow translations to
>>>>> be invalidated without holding the mm locks, we need to keep a
>>>>> reference to the page for a bit longer in some cases.
>>>>>
>>>>> [ This seems difficult to a) verify as correct; and b) difficult to
>>>>> get correct in the future. A better suggestion would be useful.
>>>>> Perhaps using something like pg->tlbflush_needed mechanism that
>>>>> already exists for pages from PV guests? ]
>>>>
>>>> Per-page flag looks clean in general, but not an expert here. Tim
>>>> might have a better idea.
>>>
>>> I think you can probably use the tlbflush_timestamp stuff as-is for
>>> EPT flushes -- the existing TLB shootdowns already drop all EPT
>>> translations.
>>
>> Are you saying that if you do a TLB shootdown you don't need to do an
>> invept command?
>
> Yes, I think so.  flush_area_local() -> hvm_flush_guest_tlbs() ->
> hvm_asid_flush_core() should DTRT.

Looks like that ends up calling vpid_sync_all(), which executes invvpid;
I presume that's different from invept. But perhaps we could extend the
basic functionality to call invept when we need it.

 -George
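
For readers unfamiliar with the tlbflush_timestamp mechanism Tim refers
to, here is a minimal, self-contained C sketch of the underlying idea:
stamp a page with the global flush generation when it is freed, and only
force a flush when the page is handed out again to a CPU that has not
flushed since that stamp. The names (flush_stamp, cpu_last_flush, and so
on) are illustrative only; the real Xen code uses page_info's
tlbflush_timestamp together with the NEED_FLUSH() check, and also handles
clock wraparound, which this sketch omits.

/*
 * Illustrative sketch (not actual Xen code) of the tlbflush_timestamp
 * idea: defer the flush until reallocation instead of flushing while
 * holding the mm locks.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct page {
    uint32_t flush_stamp;       /* global flush generation at free time */
};

static uint32_t flush_generation;    /* bumped on every full flush event */
static uint32_t cpu_last_flush[4];   /* per-CPU: last generation flushed */

/* Called when the last reference to a page is dropped. */
static void free_page(struct page *pg)
{
    /*
     * Record that stale translations may still be cached on any CPU
     * that has not flushed after this point.  No flush (and hence no
     * mm lock across a shootdown) is needed here.
     */
    pg->flush_stamp = flush_generation;
}

/* Called when a CPU performs a full TLB (and, on VMX, EPT) flush. */
static void flushed_on_cpu(unsigned int cpu)
{
    cpu_last_flush[cpu] = ++flush_generation;
}

/* Called before handing the page to a new owner running on 'cpu'. */
static bool need_flush_before_realloc(const struct page *pg, unsigned int cpu)
{
    /*
     * The CPU is safe only if it flushed at a generation newer than the
     * page's stamp, i.e. after the page was freed.  As the thread above
     * notes, on VMX that flush must also cover guest-physical mappings
     * (invept), not just linear/VPID-tagged ones (invvpid).
     */
    return cpu_last_flush[cpu] <= pg->flush_stamp;
}

int main(void)
{
    struct page pg = { 0 };

    free_page(&pg);          /* page freed; no flush issued yet */
    flushed_on_cpu(1);       /* CPU 1 later does a full flush */

    printf("CPU 0 needs flush: %d\n", need_flush_before_realloc(&pg, 0));
    printf("CPU 1 needs flush: %d\n", need_flush_before_realloc(&pg, 1));
    return 0;
}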