Re: [Xen-devel] [PATCH 06/18] arm/altp2m: Add a(p2m) table flushing routines.
Hi Julien,

On 07/04/2016 05:42 PM, Julien Grall wrote:
> Hello Sergej,
>
> On 04/07/16 13:12, Sergej Proskurin wrote:
>>> +/* Reset this p2m table to be empty */
>>> +static void p2m_flush_table(struct p2m_domain *p2m)
>>> +{
>>> +    struct page_info *top, *pg;
>>> +    mfn_t mfn;
>>> +    unsigned int i;
>>> +
>>> +    /* Check whether the p2m table has already been flushed before. */
>>> +    if ( p2m->root == NULL )
>>> +        return;
>>> +
>>> +    spin_lock(&p2m->lock);
>>> +
>>> +    /*
>>> +     * "Host" p2m tables can have shared entries &c that need a bit more care
>>> +     * when discarding them
>>> +     */
>>> +    ASSERT(!p2m_is_hostp2m(p2m));
>>> +
>>> +    /* Zap the top level of the trie */
>>> +    top = p2m->root;
>>> +
>>> +    /* Clear all concatenated first level pages */
>>> +    for ( i = 0; i < P2M_ROOT_PAGES; i++ )
>>> +    {
>>> +        mfn = _mfn(page_to_mfn(top + i));
>>> +        clear_domain_page(mfn);
>>> +    }
>>> +
>>> +    /* Free the rest of the trie pages back to the paging pool */
>>> +    while ( (pg = page_list_remove_head(&p2m->pages)) )
>>> +        if ( pg != top )
>>> +        {
>>> +            /*
>>> +             * Before freeing the individual pages, we clear them to prevent
>>> +             * reusing old table entries in future p2m allocations.
>>> +             */
>>> +            mfn = _mfn(page_to_mfn(pg));
>>> +            clear_domain_page(mfn);
>>> +            free_domheap_page(pg);
>>> +        }
>>
>> At this point, we prevent only the first root level page from being
>> freed. In case there are multiple consecutive first level pages, one of
>> them will be freed in the lower loop (and potentially crash the guest if
>> the table is reused at a later point in time). However, testing for
>> every concatenated page in the if clause of the while loop would further
>> decrease the flushing performance. Thus, my question is whether there
>> is a good way to solve this issue.
>
> The root pages are not part of p2m->pages, so there is no issue.

Thanks for clearing that up. We have already discussed this in patch #04.

>
> Regards,
>

Thank you.
Best regards,
Sergej

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel