Re: [Xen-devel] [PATCH 5 of 6] [RFC] x86/mm: use wait queues for mem_paging
Hi,

At 14:45 +0100 on 24 Feb (1330094744), Olaf Hering wrote:
> >  #ifdef __x86_64__
> > +    if ( p2m_is_paging(*t) && (q & P2M_ALLOC)
> > +         && p2m->domain == current->domain )
> > +    {
> > +        if ( locked )
> > +            gfn_unlock(p2m, gfn, 0);
> > +
> > +        /* Ping the pager */
> > +        if ( *t == p2m_ram_paging_out || *t == p2m_ram_paged )
> > +            p2m_mem_paging_populate(p2m->domain, gfn);
> > +
> > +        /* Wait until the pager finishes paging it in */
> > +        current->arch.mem_paging_gfn = gfn;
> > +        wait_event(current->arch.mem_paging_wq, ({
> > +            int done;
> > +            mfn = p2m->get_entry(p2m, gfn, t, a, 0, page_order);
> > +            done = (*t != p2m_ram_paging_in);
>
> I assume p2m_mem_paging_populate() will not return until the state is
> forwarded to p2m_ram_paging_in. Maybe p2m_is_paging(*t) would make it
> more obvious what this check is supposed to do.

But it would be wrong.  If the type is anything other than
p2m_ram_paging_in, then we can't be sure that the pager is working on
unblocking us.

Andres made the same suggestion - clearly this code needs a comment. :)

> > +            /* Safety catch: it _should_ be safe to wait here
> > +             * but if it's not, crash the VM, not the host */
> > +            if ( in_atomic() )
> > +            {
> > +                WARN();
> > +                domain_crash(p2m->domain);
> > +                done = 1;
> > +            }
> > +            done;
> > +        }));
> > +        goto again;
> > +    }
> > +#endif

> >  void p2m_mem_paging_populate(struct domain *d, unsigned long gfn)
> >  {
> >      struct vcpu *v = current;
> > @@ -965,6 +1001,7 @@ void p2m_mem_paging_populate(struct doma
> >      p2m_access_t a;
> >      mfn_t mfn;
> >      struct p2m_domain *p2m = p2m_get_hostp2m(d);
> > +    int send_request = 0;
>
> Is that variable supposed to be used?

Erk.  Clearly something got mangled in the rebase.  I'll sort that out.

> Perhaps the feature to fast-forward (or rollback) from
> p2m_ram_paging_out to p2m_ram_rw could be a separate patch. My initial
> version of this patch did not have a strict requirement for this
> feature, if I remember correctly.

Sure, I can split that into a separate patch.

Cheers,

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
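The idiom under discussion is the usual wait-queue pattern: sleep, and on
every wakeup re-evaluate the full predicate (here, re-fetching the p2m entry
and re-checking its type) rather than trusting the state observed before
going to sleep.  As a rough userspace analogue - using pthread condition
variables in place of Xen's wait queues, and made-up state names rather than
the real p2m types - the same shape looks like this:

    #include <pthread.h>
    #include <stdio.h>

    /* Illustrative stand-ins for the paging states discussed above;
     * these are NOT Xen's p2m types, just labels for the sketch. */
    enum page_state { PAGED_OUT, PAGING_IN, RAM_RW };

    static enum page_state state = PAGED_OUT;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    /* "Pager" thread: bring the page back and wake any waiters. */
    static void *pager(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        state = RAM_RW;                 /* page contents restored */
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    /* "Faulting vcpu": sleep, and on every wakeup re-read the state
     * under the lock before deciding whether to proceed -- the same
     * shape as the wait_event() predicate in the patch. */
    static void wait_for_page(void)
    {
        pthread_mutex_lock(&lock);
        while (state != RAM_RW)
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, pager, NULL);
        wait_for_page();
        pthread_join(t, NULL);
        printf("page is accessible again\n");
        return 0;
    }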