Re: [PATCH v4 07/21] IOMMU/x86: support freeing of pagetables
On 03.05.2022 18:20, Roger Pau Monné wrote:
> On Mon, Apr 25, 2022 at 10:35:45AM +0200, Jan Beulich wrote:
>> For vendor specific code to support superpages we need to be able to
>> deal with a superpage mapping replacing an intermediate page table (or
>> hierarchy thereof). Consequently an iommu_alloc_pgtable() counterpart is
>> needed to free individual page tables while a domain is still alive.
>> Since the freeing needs to be deferred until after a suitable IOTLB
>> flush was performed, released page tables get queued for processing by a
>> tasklet.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> I was considering whether to use a softirq-tasklet instead. This would
>> have the benefit of avoiding extra scheduling operations, but come with
>> the risk of the freeing happening prematurely because of a
>> process_pending_softirqs() somewhere.
>
> I'm sorry again if I already raised this, I don't seem to find a
> reference.

Earlier on you only suggested "to perform the freeing after the flush".

> What about doing the freeing before resuming the guest execution in
> guest vCPU context?
>
> We already have a hook like this on HVM in hvm_do_resume() calling
> vpci_process_pending(). I wonder whether we could have a similar hook
> for PV and keep the pages to be freed in the vCPU instead of the pCPU.
> This would have the benefit of being able to context switch the vCPU
> in case the operation takes too long.

I think this might work in general, but would be troublesome when
preparing Dom0 (where we don't run on any of Dom0's vCPU-s, and we
won't ever "exit to guest context" on an idle vCPU). I'm also not
really fancying to use something like

    v = current->domain == d ? current : d->vcpu[0];

(leaving aside that we don't really have d available in
iommu_queue_free_pgtable() and I'd be hesitant to convert it back).
Otoh it might be okay to free page tables right away for domains
which haven't run at all so far.
But this would again require passing struct domain * to
iommu_queue_free_pgtable(). Another upside (I think) of the current
approach is that all logic is contained in a single source file (i.e.
in particular there's no new field needed in a per-vCPU structure
defined in some header).

> Not that the current approach is wrong, but doing it in the guest
> resume path we could likely prevent guests doing heavy p2m
> modifications from hogging CPU time.

Well, they would still be hogging time, but that time would then be
accounted towards their time slices, yes.

>> @@ -550,6 +551,91 @@ struct page_info *iommu_alloc_pgtable(st
>>      return pg;
>>  }
>>
>> +/*
>> + * Intermediate page tables which get replaced by large pages may only be
>> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
>> + * per-CPU list, with a per-CPU tasklet processing the list on the assumption
>> + * that the necessary IOTLB flush will have occurred by the time tasklets get
>> + * to run. (List and tasklet being per-CPU has the benefit of accesses not
>> + * requiring any locking.)
>> + */
>> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
>> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
>> +
>> +static void free_queued_pgtables(void *arg)
>> +{
>> +    struct page_list_head *list = arg;
>> +    struct page_info *pg;
>> +    unsigned int done = 0;
>> +
>
> With the current logic I think it might be helpful to assert that the
> list is not empty when we get here?
>
> Given the operation requires a context switch we would like to avoid
> such unless there's indeed pending work to do.

But is that worth adding an assertion and risking to kill a system
just because there's a race somewhere by which we might get here
without any work to do? If you strongly think we want to know about
such instances, how about a WARN_ON_ONCE() (except that we still
don't have that specific construct, it would need to be open-coded
for the time being)?
>> +static int cf_check cpu_callback(
>> +    struct notifier_block *nfb, unsigned long action, void *hcpu)
>> +{
>> +    unsigned int cpu = (unsigned long)hcpu;
>> +    struct page_list_head *list = &per_cpu(free_pgt_list, cpu);
>> +    struct tasklet *tasklet = &per_cpu(free_pgt_tasklet, cpu);
>> +
>> +    switch ( action )
>> +    {
>> +    case CPU_DOWN_PREPARE:
>> +        tasklet_kill(tasklet);
>> +        break;
>> +
>> +    case CPU_DEAD:
>> +        page_list_splice(list, &this_cpu(free_pgt_list));
>
> I think you could check whether list is empty before queuing it?

I could, but this would make the code (slightly) more complicated for
improving something which doesn't occur frequently.

Jan