Re: [Xen-devel] [RFC PATCH] PVH: cleanup of p2m upon p2m destroy
>>> On 18.12.13 at 02:01, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> On Tue, 17 Dec 2013 08:42:32 +0000 "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> > +/* This function to do any cleanup while walking the entire p2m */
>> > +int p2m_cleanup(struct domain *d)
>> > +{
>> > + int rc = 0;
>> > + unsigned long gfn, count = 0;
>> > + struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> > +
>> > + if ( !is_pvh_domain(d) )
>> > + return 0;
>> > +
>> > + for ( gfn = p2m->next_foreign_gfn_to_check;
>> > + gfn <= p2m->max_mapped_pfn; gfn++, count++ )
>> > + {
>> > + p2m_type_t p2mt;
>> > + mfn_t mfn = get_gfn_query(d, gfn, &p2mt);
>> > +
>> > + if ( unlikely(p2m_is_foreign(p2mt)) )
>> > + put_page(mfn_to_page(mfn));
>> > + put_gfn(d, gfn);
>> > +
>> > + /* Preempt every 10k pages, arbitrary */
>> > + if ( count == 10000 && hypercall_preempt_check() )
>> > + {
>> > + p2m->next_foreign_gfn_to_check = gfn + 1;
>> > + rc = -EAGAIN;
>> > + break;
>> > + }
>>
>> So it looks like you copied one of the two obvious bugs from
>> relinquish_shared_pages() _and_ deferred the preemption point
>
> LOL...
If that makes you laugh, I would infer you saw the (as said, quite
obvious) bugs, and hence should not only have avoided copying one
of them, but also have submitted a fix to the original code.
You did neither, and hence I assume you didn't pay too much
attention to how what you copied actually works...
>> by quite a bit - 10,000 pages is quite a lot, the 512 used there
>> seems much more reasonable.
>
> Because there are only a very small number of foreign pages, the
> loop is mostly not doing anything. Ok, will reduce it to 1k.
Right - in which case biasing just like done in the fix to the original
code is probably the right approach here too.
Jan