
Re: [PATCH 5/7] powerpc/mm: support nested lazy_mmu sections



On 05/09/2025 17:52, Alexander Gordeev wrote:
> On Thu, Sep 04, 2025 at 01:57:34PM +0100, Kevin Brodsky wrote:
> ...
>>  static inline lazy_mmu_state_t arch_enter_lazy_mmu_mode(void)
>>  {
>>      struct ppc64_tlb_batch *batch;
>> +    int lazy_mmu_nested;
>>  
>>      if (radix_enabled())
>>              return LAZY_MMU_DEFAULT;
>> @@ -39,9 +40,14 @@ static inline lazy_mmu_state_t 
>> arch_enter_lazy_mmu_mode(void)
>>       */
>>      preempt_disable();
>>      batch = this_cpu_ptr(&ppc64_tlb_batch);
>> -    batch->active = 1;
>> +    lazy_mmu_nested = batch->active;
>>  
>> -    return LAZY_MMU_DEFAULT;
>> +    if (!lazy_mmu_nested) {
> Why not just?
>
>       if (!batch->active) {

Very fair question! I think the extra variable made sense in an earlier
version of that patch, but now it's used only once and doesn't really
improve readability either. Will remove it in v2, also in patch 6
(basically the same code). Thanks!

- Kevin
