Re: [PATCH 2/2] xen/arm: p2m: Populate pages for GICv2 mapping in arch_domain_create()
On 18/10/2022 00:01, Julien Grall wrote:
>>>> Signed-off-by: Henry Wang <Henry.Wang@xxxxxxx>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>> ---
>>>> CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
>>>> CC: Julien Grall <julien@xxxxxxx>
>>>> CC: Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
>>>> CC: Bertrand Marquis <bertrand.marquis@xxxxxxx>
>>>> CC: Henry Wang <Henry.Wang@xxxxxxx>
>>>> ---
>>>>  xen/arch/arm/p2m.c | 43 +++++++++++++++++++++++++++++++++++++++++--
>>>>  1 file changed, 41 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>> index 6826f6315080..76a0e31c6c8c 100644
>>>> --- a/xen/arch/arm/p2m.c
>>>> +++ b/xen/arch/arm/p2m.c
>>>> @@ -1736,8 +1736,36 @@ void p2m_final_teardown(struct domain *d)
>>>>      if ( !p2m->domain )
>>>>          return;
>>>>
>>>> -    ASSERT(page_list_empty(&p2m->pages));
>>>> -    ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
>>>> +    /*
>>>> +     * On the domain_create() error path only, we can end up here with a
>>>> +     * non-zero P2M pool.
>>>> +     *
>>>> +     * At present, this is a maximum of 16 pages, spread between p2m->pages
>>>> +     * and the free list.  The domain has never been scheduled (it has no
>>>> +     * vcpus), so there is no TLB maintenance to perform; just free
>>>> +     * everything.
>>>> +     */
>>>> +    if ( !page_list_empty(&p2m->pages) ||
>>>> +         !page_list_empty(&d->arch.paging.p2m_freelist) )
>>>> +    {
>>>> +        struct page_info *pg;
>>>> +
>>>> +        /*
>>>> +         * There's no sensible "in the domain_create() error path" predicate,
>>>> +         * so simply sanity check that we don't have unexpected work to do.
>>>> +         */
>>>> +        ASSERT(d->arch.paging.p2m_total_pages <= 16);
>>>> +
>>>> +        spin_lock(&d->arch.paging.lock);
>>>> +
>>>> +        while ( (pg = page_list_remove_head(&p2m->pages)) )
>>>> +            free_domheap_page(pg);
>>>> +        while ( (pg = page_list_remove_head(&d->arch.paging.p2m_freelist)) )
>>>> +            free_domheap_page(pg);
>>>> +
>>>> +        d->arch.paging.p2m_total_pages = 0;
>>>> +
>>>> +        spin_unlock(&d->arch.paging.lock);
>>>> +    }
>>>
>>> ... you are hardcoding both p2m_teardown() and p2m_set_allocation().
>>> IMO this is not an improvement at all. It is just making the code more
>>> complex than necessary, and lacks all the explanation of the
>>> assumptions.
>>>
>>> So while I am fine with your patch #1 (already reviewed it), there is
>>> a better patch from Henry on the ML. So we should take his (rebased)
>>> instead of yours.
>>
>> If by better, you mean something that still has errors, then sure.
>>
>> There's a really good reason why you cannot safely repurpose
>> p2m_teardown().  It's written expecting a fully constructed domain -
>> which is fine, because that's how it is used.  It doesn't cope safely
>> with a partially constructed domain.
>
> It is not 100% clear what is the issue you are referring to, as the
> VMID is valid at this point. So what part would be wrong?

Falling over a bad root pointer from an early construction exit.

> But if there are parts of p2m_teardown() that are not safe for a
> partially constructed domain, then we should split the code. This
> would be much better than the duplication you are proposing.

You have two totally different contexts with different safety
requirements.

c/s 55914f7fc9 is a reasonably good and clean separation between
preemptible and non-preemptible cleanup[1].

You've agreed that the introduction of the non-preemptible path into the
preemptible path is a hack and a layering violation, and will need
undoing later.  Others have raised this concern too.

Now consider what actually happens when we take the error path, and the
ancillary collateral damage involved.
It:

1) Zeros all the root frames.
2) Switches to a remote VMID in order to flush the TLBs, not that they
   need flushing in the first place.
3) For allocated P2M pages, moves them one at a time onto the free
   list, taking the paging lock for each frame.
4) (wrapping the preemptible helper in an ignore loop) Finally frees
   the complete pool.

... in a case where 16 is the chosen value because you're already
concerned about the hypercall taking too long.

Is that safe?  Possibly.  Is it wise?  No.

You can't test the error path in question here (because my fault_ttl
patches are still pending).  "Correctness" is almost exclusively by
code inspection.

Also realise that you've now split the helper between regular hypercall
context and RCU context, and recall what happened when we finally
started asserting that memory couldn't be allocated in stop-machine
context.  How certain are you that the safety is the same on earlier
versions of Xen?  What is the likelihood that all of these actions will
remain safe given future development?

Despite what is being claimed, the attempt to share cleanup logic is
introducing fragility and risk, not removing it.  This is a bugfix to a
security fix, so it is totally dead on arrival; net safety, especially
in older versions of Xen, is *the highest priority*.

These two different contexts don't share any common properties of how
to clean up the pool, save freeing the frames back to the memory
allocator.  In a proper design, this is the hint that they shouldn't
share logic either.

Given that you do expect someone to spend yet more time & effort to
undo the short-term hack currently being proposed, how do you envisage
the end result looking?

~Andrew

[1] Although the order of actions in p2m_teardown() for the common case
is poor.  The root pagetables should be cleaned and freed first, so
steps 1 and 2 of the list above are not repeated for every continuation.