Re: [PATCH v5 03/15] IOMMU/x86: support freeing of pagetables
 On 31.05.2022 18:25, Roger Pau Monné wrote:
> On Fri, May 27, 2022 at 01:13:09PM +0200, Jan Beulich wrote:
>> @@ -566,6 +567,98 @@ struct page_info *iommu_alloc_pgtable(st
>>      return pg;
>>  }
>>  
>> +/*
>> + * Intermediate page tables which get replaced by large pages may only be
>> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
>> + * per-CPU list, with a per-CPU tasklet processing the list on the
>> + * assumption that the necessary IOTLB flush will have occurred by the
>> + * time tasklets get to run. (List and tasklet being per-CPU has the
>> + * benefit of accesses not requiring any locking.)
>> + */
>> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
>> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
>> +
>> +static void free_queued_pgtables(void *arg)
>> +{
>> +    struct page_list_head *list = arg;
>> +    struct page_info *pg;
>> +    unsigned int done = 0;
>> +
>> +    while ( (pg = page_list_remove_head(list)) )
>> +    {
>> +        free_domheap_page(pg);
>> +
>> +        /* Granularity of checking somewhat arbitrary. */
>> +        if ( !(++done & 0x1ff) )
>> +             process_pending_softirqs();
> 
> Hm, I'm wondering whether we really want to process pending softirqs
> here.
> 
> Such processing will prevent the watchdog from triggering, which we
> likely want in production builds.  OTOH in debug builds we should make
> sure that free_queued_pgtables() doesn't take longer than a watchdog
> window, or else it's likely to cause issues for guests scheduled on
> this same pCPU (and calling process_pending_softirqs() will just mask
> it).
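
[ Editorial aside: the debug-build check Roger describes could, for
  instance, take roughly the shape below.  This is only a sketch; the
  WARN_ON() placement and the one-second budget are illustrative
  assumptions, not values taken from the series or from Xen's actual
  watchdog configuration. ]

static void free_queued_pgtables(void *arg)
{
    struct page_list_head *list = arg;
    struct page_info *pg;
    unsigned int done = 0;
#ifndef NDEBUG
    s_time_t deadline = NOW() + SECONDS(1);
#endif

    while ( (pg = page_list_remove_head(list)) )
    {
        free_domheap_page(pg);

        /* Granularity of checking somewhat arbitrary. */
        if ( !(++done & 0x1ff) )
        {
#ifndef NDEBUG
            /* Would this batch have overrun its budget without the call below? */
            WARN_ON(NOW() > deadline);
#endif
            process_pending_softirqs();
#ifndef NDEBUG
            deadline = NOW() + SECONDS(1);
#endif
        }
    }
}
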
Doesn't this consideration apply to about every use of the function we
already have in the code base?
Jan
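
[ Editorial aside: for readers not following the full series, the queuing
  side of the scheme described in the comment quoted above might look
  roughly like the sketch below.  The helper name and details are
  assumptions for illustration; they are not quoted from the patch. ]

static void queue_free_pgtable(struct page_info *pg)
{
    unsigned int cpu = smp_processor_id();

    /* List and tasklet are strictly per-CPU, so no locking is required. */
    page_list_add_tail(pg, &per_cpu(free_pgt_list, cpu));

    /*
     * The tasklet runs only once tasklet/softirq processing resumes on this
     * CPU, by which point the IOTLB flush covering the replaced table is
     * expected to have completed.
     */
    tasklet_schedule_on_cpu(&per_cpu(free_pgt_tasklet, cpu), cpu);
}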
 