Re: [Xen-devel] [RFC PATCH 3/3] Implement 3-level event channel routines.
>>> On 09.01.13 at 11:56, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Wed, 2013-01-09 at 08:38 +0000, Jan Beulich wrote:
>> >>> On 08.01.13 at 18:33, Wei Liu <Wei.Liu2@xxxxxxxxxx> wrote:
>> > On Thu, 2013-01-03 at 11:35 +0000, Jan Beulich wrote:
>> >> >>> On 31.12.12 at 19:22, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
>> >> >
>> >> > +static void __unmap_l2_sel(struct vcpu *v)
>> >> > +{
>> >> > +    unsigned long mfn;
>> >> > +
>> >> > +    if ( v->evtchn_pending_sel_l2 != 0 )
>> >> > +    {
>> >> > +        unmap_domain_page_global(v->evtchn_pending_sel_l2);
>> >> > +        mfn = virt_to_mfn(v->evtchn_pending_sel_l2);
>> >>
>> >> virt_to_mfn() is not valid on the output of
>> >> map_domain_page{,_global}() (same further down). Yes, there is
>> >> at least one example of this in the existing code, but that's wrong
>> >> too, and is getting eliminated by the 16Tb patch series I'm in the
>> >> process of putting together.
>> >>
>> >
>> > So what's the correct way to do this? Do I need to wait for your patch
>> > series?
>>
>> Considering that the old 32-bit case of map_domain_page()
>> doesn't matter anymore,
>
> We still care about it on 32-bit ARM FWIW. (I'm not sure if that
> invalidates your advice below though)

Right - which would require ARM to implement
domain_page_map_to_mfn() (so far used only in x86 and tmem code).

>> using domain_page_map_to_mfn()
>> here would be the way to go (while the 32-bit version of it
>> didn't allow map_domain_page_global() to be handled through
>> it, my 64-bit implementation of it will).
>>
>> And of course there's the obvious alternative of storing the
>> MFN rather than re-obtaining it in the teardown path.

This second option of course would always be valid, without
architecture specific changes.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
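A minimal sketch of the second option Jan mentions -- recording the MFN
when the page is mapped so the teardown path never calls virt_to_mfn()
on an address returned by map_domain_page_global(). The map-side helper
name, the evtchn_pending_sel_l2_mfn field in struct vcpu, and the
put_page() at the end are illustrative assumptions, not part of the
posted patch:

/*
 * Illustrative sketch only -- not the posted patch.  Assumes a new
 * field, unsigned long evtchn_pending_sel_l2_mfn, added to struct vcpu
 * next to evtchn_pending_sel_l2.
 */
static int __map_l2_sel(struct vcpu *v, unsigned long mfn)
{
    unsigned long *va = map_domain_page_global(mfn);

    if ( va == NULL )
        return -ENOMEM;

    v->evtchn_pending_sel_l2 = va;
    /* Record the MFN now; virt_to_mfn(va) is not valid for this mapping. */
    v->evtchn_pending_sel_l2_mfn = mfn;

    return 0;
}

static void __unmap_l2_sel(struct vcpu *v)
{
    if ( v->evtchn_pending_sel_l2 != NULL )
    {
        unsigned long mfn = v->evtchn_pending_sel_l2_mfn;

        unmap_domain_page_global(v->evtchn_pending_sel_l2);
        v->evtchn_pending_sel_l2 = NULL;

        /*
         * Whatever the original teardown did with virt_to_mfn()'s result
         * (e.g. dropping a page reference) would use the saved MFN here.
         */
        put_page(mfn_to_page(mfn));
    }
}

The first option would instead derive the MFN in the teardown path with
domain_page_map_to_mfn(v->evtchn_pending_sel_l2), which, as noted in the
thread, requires per-architecture support (so far only x86 and tmem code).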