
Re: [Xen-devel] [PATCH v5 4/8] mm: Scrub memory from idle loop



On 07/31/2017 11:20 AM, Jan Beulich wrote:
>>>> Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx> 07/23/17 4:14 AM >>>
>>>> @@ -1050,17 +1120,42 @@ static void scrub_free_pages(unsigned int node)
>>>> -                if ( node_need_scrub[node] == 0 )
>>>> -                    return;
>>>> +                if ( preempt || (node_need_scrub[node] == 0) )
>>>> +                    goto out;
>>>>               }
>>>>           } while ( order-- != 0 );
>>>>       }
>>>> +
>>>> + out:
>>>> +    spin_unlock(&heap_lock);
>>>> +    node_clear(node, node_scrubbing);
>>>> +    return softirq_pending(cpu) || (node_to_scrub(false) != NUMA_NO_NODE);
>>> While I can see why you use it here, the softirq_pending() looks sort of
>>> misplaced: While invoking it twice in the caller will look a little odd too,
>>> I still think that's where the check belongs.
>>
>> scrub_free_pages is called from idle loop as
>>
>> else if ( !softirq_pending(cpu) && !scrub_free_pages() )
>>     pm_idle();
>>
>> so softirq_pending() is unnecessary here.
>>
>> (Not sure why you are saying it would be invoked twice)
> That was sort of implicit - the caller would want to become
>
>
>     else if ( !softirq_pending(cpu) && !scrub_free_pages() &&
>               !softirq_pending(cpu) )
>         pm_idle();
>
> to account for the fact that a softirq may become pending while scrubbing.

That would look really odd IMO.

Would

else if ( !softirq_pending(cpu) )
    if ( !scrub_free_pages() && !softirq_pending(cpu) )
        pm_idle();

or 

else if ( !softirq_pending(cpu) && !scrub_free_pages() )
    if ( !softirq_pending(cpu) )
        pm_idle();

be better? (I'd prefer the first)
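
To make the intent of the double test explicit, in context the first
variant would read roughly like this (the rest of the idle loop elided
to "..."):

    ...
    /* Don't start scrubbing if a softirq is already pending. */
    else if ( !softirq_pending(cpu) )
        /*
         * A softirq may have become pending while scrub_free_pages()
         * was running, so re-check before going idle.
         */
        if ( !scrub_free_pages() && !softirq_pending(cpu) )
            pm_idle();
    ...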

-boris


