Re: [Xen-ia64-devel] [Q] about assign_domain_page_replace
Thank you very much for the information. It helped greatly.
Could you try the attached patch together with the previous patch?
I hope it fixes the foreign domain page mapping case.
However, the grant table mapping issue still remains.

On Fri, Jun 08, 2007 at 10:01:30PM +0900, Akio Takebe wrote:
> Hi, Isaku
>
> I'm sorry for the late reply.
>
> >On Wed, Jun 06, 2007 at 04:31:16PM +0900, Akio Takebe wrote:
> >
> >> >domain_page_flush_and_put() isn't aware of NULL-owner pages, I'll fix it.
> >> >However more issues seem to be there.
> >
> >I sent out the patch which fixes the p2m exposure issues.
> >But I don't think that patch resolves your issues.
> >At best it only conceals them, which is worse than a panic.
> >
> Yes, as you said, when I retried with your patch,
> the dom0 panic no longer occurred.
> I attach the xm dmesg log. (It uses the attached patch.)
>
> >> The result of booting domU is below.
> >
> >There seem to be several issues. Hmmmm...
> >What domains correspond to 32896 and 61177984?
> >I guess
> >  _domain=61177984 => dom0
> >  _domain=32896    => newly created domain.
> >Is that correct?
> Yes, you're right.
>
> >Foreign domain page mapping doesn't seem to work correctly.
> >The domain builder seems to map a page which belongs to the newly
> >created domain. If so, it shouldn't be that old_mfn->count_info=0,
> >old_mfn->u.inuse._domain=0.
> >Can you confirm the following?
> >
> > - Does dom0 issue this hypercall? (i.e. what's the current domain?)
> >
> Yes.
> This issue seems to happen at the following hypercalls:
> xen_ia64_dom_fw_setup: xen_ia64_dom_fw_map (hypercall page=0x0)
> xen_ia64_dom_fw_setup: xen_ia64_dom_fw_map (fw table=0x2000)
> xen_ia64_dom_fw_setup: xen_ia64_dom_fw_map (acpi table=0x1000)
>
> When I add more debug messages, I get the following output.
> I specified 512MB of domU memory, so the page at gmfn=0x7ffd is the
> start_info page, and the pages with mfn_or_gmfn=0x0 are fw pages.
>
> (XEN) __dom0vp_add_physmap:d 0xf000000007bf0080 domid 0 gpfn 0x1024d6f
>       mfn_or_gmfn 0x1659 flags 0x0 domid 1
> (XEN) __dom0vp_add_physmap:d 0xf000000007bf0080 domid 0 gpfn 0x1026800
>       mfn_or_gmfn 0x7ffc flags 0x0 domid 1
> (XEN) __dom0vp_add_physmap:d 0xf000000007bf0080 domid 0 gpfn 0x1026800
>       mfn_or_gmfn 0x7ffd flags 0x0 domid 1
> (XEN) __dom0vp_add_physmap:d 0xf000000007bf0080 domid 0 gpfn 0x1026800
>       mfn_or_gmfn 0x0 flags 0x0 domid 1
> (XEN) __dom0vp_add_physmap:d 0xf000000007bf0080 domid 0 gpfn 0x1026c00
>       mfn_or_gmfn 0x0 flags 0x0 domid 1
> (XEN) __dom0vp_add_physmap:d 0xf000000007bf0080 domid 0 gpfn 0x1027000
>       mfn_or_gmfn 0x0 flags 0x0 domid 1
> (XEN) mfn=0x000000000102003d, old_mfn=0x0000000001020001
> (XEN) assign_domain_page_replace: old_mfn->count_info=1155
> (XEN) assign_domain_page_replace: mfn->count_info=4
> (XEN) assign_domain_page_replace: (domain_id of old_mfn)=32760
> (XEN) assign_domain_page_replace: (domain_id of mfn)=1
> (XEN)
> (XEN) Call Trace:
> (XEN)  [<f0000000040ab630>] show_stack+0x80/0xa0
> (XEN)         sp=f000000007bcfc10 bsp=f000000007bc9350
> (XEN)  [<f00000000406fab0>] assign_domain_page_replace+0x2d0/0x3d0
> (XEN)         sp=f000000007bcfde0 bsp=f000000007bc92f8
> (XEN)  [<f000000004070e80>] __dom0vp_add_physmap+0x370/0x660
> (XEN)         sp=f000000007bcfde0 bsp=f000000007bc9298
> (XEN)  [<f0000000040524e0>] do_dom0vp_op+0x1e0/0x4d0
> (XEN)         sp=f000000007bcfdf0 bsp=f000000007bc9258
> (XEN)  [<f000000004002e30>] fast_hypercall+0x170/0x340
> (XEN)         sp=f000000007bcfe00 bsp=f000000007bc9258
>
> > - If so, what's the pseudo physical address of dom0?
> >   I guess it is in the p2m table area. Is this correct?
> >   You can find the area by cat /proc/iomem | grep 'Xen p2m table'
> >   or by finding a dom0 kernel boot message like the following:
> >
> > > Xen p2m: to [0x0000000300000000, 0x0000000303ffc000) (65536 KBytes)
> >
> >If the above guess is correct, then destroying the domain shouldn't
> >cause a panic. Can you check the mfn which causes the panic?
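The range check suggested above can be scripted. A minimal sketch, with the "Xen p2m table" line hard-coded from the /proc/iomem output quoted later in this thread (a non-Xen machine has no such entry; on a real dom0 you would grep /proc/iomem directly):

```shell
# Sketch: does a machine address fall inside the "Xen p2m table" range?
# The iomem line below is taken verbatim from the output in this thread;
# on dom0 you would obtain it with: grep 'Xen p2m table' /proc/iomem
line='4094000000-4097ffffff : Xen p2m table'
addr=0x409c000000

start="0x${line%%-*}"       # hex digits left of the first '-'
rest="${line#*-}"
end="0x${rest%% *}"         # hex digits up to the first space

if [ $((addr >= start && addr <= end)) -eq 1 ]; then
    echo "in p2m table"
else
    echo "not in p2m table"
fi
```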
>
> No, the address is not in the p2m table.
>
> hex(0x1027000 << 14) ==> '0x409c000000'
> # cat /proc/iomem | grep Xen
> 4094000000-4097ffffff : Xen p2m table
>
> Best Regards,
>
> Akio Takebe

--
yamahata

Attachment:
15166_27cd876f9394_fix_one_more_zero_p2m_page.patch

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
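The address arithmetic used in the exchange above can be sketched as a quick check. This assumes the 14-bit page shift the thread itself uses (`gpfn << 14`, i.e. 16KB pages on ia64); the gpfn and p2m range are the values reported in the thread:

```python
# Sketch of the arithmetic from the thread: a guest frame number
# converts to an address via a 14-bit left shift (16KB pages), and the
# p2m range is the one quoted from /proc/iomem above.
PAGE_SHIFT = 14

def in_p2m(gpfn, p2m_start, p2m_end):
    """True if gpfn's address falls inside the Xen p2m table range."""
    addr = gpfn << PAGE_SHIFT
    return p2m_start <= addr <= p2m_end

addr = 0x1027000 << PAGE_SHIFT
print(hex(addr))                                      # 0x409c000000
print(in_p2m(0x1027000, 0x4094000000, 0x4097ffffff))  # False
```

The result confirms Akio's observation: 0x409c000000 lies just past the end of the p2m range 4094000000-4097ffffff, so the faulting page is not a p2m table page.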