Re: [Xen-devel] [PATCH v3 4/9] mm: Scrub memory from idle loop
>>>> +bool scrub_free_pages(void)
>>>>  {
>>>>      struct page_info *pg;
>>>>      unsigned int zone, order;
>>>>      unsigned long i;
>>>> +    unsigned int cpu = smp_processor_id();
>>>> +    bool preempt = false;
>>>> +    nodeid_t node;
>>>>
>>>> -    ASSERT(spin_is_locked(&heap_lock));
>>>> +    /*
>>>> +     * Don't scrub while dom0 is being constructed since we may
>>>> +     * fail trying to call map_domain_page() from scrub_one_page().
>>>> +     */
>>>> +    if ( system_state < SYS_STATE_active )
>>>> +        return false;
>>> I assume that's because of the mapcache vcpu override? That's x86
>>> specific though, so the restriction here ought to be arch specific.
>>> Even better would be to find a way to avoid this restriction
>>> altogether, as on bigger systems only one CPU is actually busy
>>> while building Dom0, so all others could be happily scrubbing. Could
>>> that override become a per-CPU one perhaps?
>> Is it worth doing though? What you are saying below is exactly why I
>> simply return here --- there were very few dirty pages.
> Well, in that case the comment should cover this second reason as
> well, at the very least.
>
>> This may change if we decide to use idle-loop scrubbing for boot
>> scrubbing as well (as Andrew suggested earlier), but there is little
>> reason to do it now IMO.
> Why not? In fact I had meant to ask why your series doesn't
> include that?

This felt to me like a separate feature.

>>> Otoh there's not much to scrub yet until Dom0 has had all its memory
>>> allocated, and we know which pages truly remain free (which still
>>> want the scrubbing currently done at boot time). But that point in
>>> time may still be earlier than when we switch to SYS_STATE_active.
> IOW I think boot scrubbing could be kicked off as soon as Dom0
> has had the bulk of its memory allocated.

Since we are only trying to avoid the mapcache vcpu override, can't we
just scrub whenever the override is NULL (per-CPU or not)?

>
>>>> @@ -1065,16 +1131,29 @@ static void scrub_free_pages(unsigned int node)
>>>>                      pg[i].count_info &= ~PGC_need_scrub;
>>>>                      node_need_scrub[node]--;
>>>>                  }
>>>> +                if ( softirq_pending(cpu) )
>>>> +                {
>>>> +                    preempt = true;
>>>> +                    break;
>>>> +                }
>>> Isn't this a little too eager, especially if you didn't have to scrub
>>> the page on this iteration?
>> What would be a good place then? Count how many pages were actually
>> scrubbed and check for pending interrupts every so many?
> Yes.
>
>> Even if we don't scrub at all, walking the whole heap can take a while.
> Correct - you can't skip this check altogether even if no page
> requires actual scrubbing.

But then how will counting help --- we want to check periodically even
when the count is zero? Once per zone? That may still be too infrequent,
since we may have, for example, one dirty page per order, so we would
scan a whole order while bumping the count by only one.

-boris
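On the override question above: if the only hazard is map_domain_page()
misbehaving while the Dom0-build mapcache vcpu override is in effect,
the guard could test the override directly instead of system_state. A
minimal sketch of that idea follows; mapcache_override_active() is a
hypothetical predicate (x86-only in substance, trivially false on other
architectures), not something the patch provides:

    /*
     * map_domain_page() is only unsafe here while a mapcache vcpu
     * override is active (x86, during Dom0 construction), so bail
     * out only in that window instead of for the whole of boot.
     * Idle CPUs not involved in building Dom0 can keep scrubbing.
     */
    if ( mapcache_override_active() )    /* hypothetical helper */
        return false;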
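On the preemption-check granularity: one way to satisfy both
constraints (don't poll softirq_pending() for every page, yet still
check while scanning long runs of clean pages) is a weighted budget
rather than a plain count of scrubbed pages. In the sketch below, the
weight of 100 and the SCRUB_PREEMPT_BUDGET threshold are invented for
illustration and are not part of the patch:

    unsigned long cnt = 0;

    for ( i = 0; i < (1UL << order); i++ )
    {
        if ( pg[i].count_info & PGC_need_scrub )
        {
            scrub_one_page(&pg[i]);
            pg[i].count_info &= ~PGC_need_scrub;
            node_need_scrub[node]--;
            cnt += 100;  /* scrubbing a page is expensive: heavy weight */
        }
        else
            cnt++;       /* merely inspecting a clean page is cheap */

        if ( cnt >= SCRUB_PREEMPT_BUDGET )
        {
            cnt = 0;
            if ( softirq_pending(cpu) )
            {
                preempt = true;
                break;
            }
        }
    }

With a budget of, say, 1000, the check fires after roughly ten scrubbed
pages or a thousand merely inspected ones, whichever comes first. That
also covers the one-dirty-page-per-order case raised above: the
clean-page increments alone keep the checks frequent enough.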