Re: [PATCH v4 07/21] IOMMU/x86: support freeing of pagetables
On Thu, May 05, 2022 at 10:20:36AM +0200, Jan Beulich wrote:
> On 04.05.2022 17:06, Roger Pau Monné wrote:
> > On Wed, May 04, 2022 at 03:07:24PM +0200, Jan Beulich wrote:
> >> On 03.05.2022 18:20, Roger Pau Monné wrote:
> >>> On Mon, Apr 25, 2022 at 10:35:45AM +0200, Jan Beulich wrote:
> >>>> For vendor specific code to support superpages we need to be able to
> >>>> deal with a superpage mapping replacing an intermediate page table (or
> >>>> hierarchy thereof). Consequently an iommu_alloc_pgtable() counterpart is
> >>>> needed to free individual page tables while a domain is still alive.
> >>>> Since the freeing needs to be deferred until after a suitable IOTLB
> >>>> flush was performed, released page tables get queued for processing by a
> >>>> tasklet.
> >>>>
> >>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >>>> ---
> >>>> I was considering whether to use a softirq-tasklet instead. This would
> >>>> have the benefit of avoiding extra scheduling operations, but come with
> >>>> the risk of the freeing happening prematurely because of a
> >>>> process_pending_softirqs() somewhere.
> >>>
> >>> I'm sorry again if I already raised this, I don't seem to find a
> >>> reference.
> >>
> >> Earlier on you only suggested "to perform the freeing after the flush".
> >>
> >>> What about doing the freeing before resuming the guest execution in
> >>> guest vCPU context?
> >>>
> >>> We already have a hook like this on HVM in hvm_do_resume() calling
> >>> vpci_process_pending(). I wonder whether we could have a similar hook
> >>> for PV and keep the pages to be freed in the vCPU instead of the pCPU.
> >>> This would have the benefit of being able to context switch the vCPU
> >>> in case the operation takes too long.
> >>
> >> I think this might work in general, but would be troublesome when
> >> preparing Dom0 (where we don't run on any of Dom0's vCPU-s, and we
> >> won't ever "exit to guest context" on an idle vCPU). I'm also not
> >> really fancying to use something like
> >>
> >> v = current->domain == d ? current : d->vcpu[0];
> >
> > I guess a problematic case would also be hypercalls executed in a
> > domain context triggering the freeing of a different domain's iommu
> > page table pages. As then the freeing would be accounted to the
> > current domain instead of the owner of the pages.
>
> Aiui such can happen only during domain construction. Any such
> operation behind the back of a running guest is imo problematic.
>
> > dom0 doesn't seem that problematic, any freeing triggered on a system
> > domain context could be performed in place (with
> > process_pending_softirqs() calls to ensure no watchdog triggering).
> >
> >> (leaving aside that we don't really have d available in
> >> iommu_queue_free_pgtable() and I'd be hesitant to convert it back).
> >> Otoh it might be okay to free page tables right away for domains
> >> which haven't run at all so far.
> >
> > Could be, but then we would have to make hypercalls that can trigger
> > those paths preemptible I would think.
>
> Yes, if they aren't already and if they allow for freeing of
> sufficiently large numbers of pages. That's kind of another argument
> against doing so right here, isn't it?

Indeed, as it's likely to make the implementation more complex IMO.

So let's use this pCPU implementation.

Thanks, Roger.