Re: [Xen-ia64-devel] [Q] about assign_domain_page_replace
Hi, Isaku

>On Wed, Jun 06, 2007 at 03:16:25PM +0900, Akio Takebe wrote:
>
>> >"mfn != old_mfn" itself isn't a bug of Xen VMM.
>> >It should be okay from the hypervisor point of view.
>> >In both cases (== and !=), Xen VMM should continue to work fine.
>> >domain_put_page() makes mfn_to_page(old_mfn)->count_info = 0 and
>> >frees the page.
>> No, get_page() is not called for the page.
>> So, after domain_put_page(), page->count_info == -1.
>
>You meant page->count_info == 0 before calling domain_put_page().
>Let me confirm the following:
>- Is the page owner NULL?
>- Does dom0 call __dom0vp_add_physmap()?
>- The page doesn't belong to dom0, but it is assigned to
>  another non-dom0 domain's pseudo-physical address.
>  When that domain is destroyed, Xen panics. Is this right?
>
>domain_page_flush_and_put() isn't aware of NULL-owner pages; I'll fix it.
>However, there seem to be more issues.
>
Thank you. I am using the following patch.

diff -r 0cf6b75423e9 xen/arch/ia64/xen/mm.c
--- a/xen/arch/ia64/xen/mm.c	Mon Jun 04 14:17:54 2007 -0600
+++ b/xen/arch/ia64/xen/mm.c	Wed Jun 06 18:04:59 2007 +0900
@@ -1150,6 +1150,16 @@ assign_domain_page_replace(struct domain
     // => create_host_mapping()
     // => assign_domain_page_replace()
     if (mfn != old_mfn) {
+        printk("mfn=0x%016lx, old_mfn=0x%016lx\n", mfn, old_mfn);
+        printk("%s: old_mfn->count_info=%u\n", __func__,
+               (u32)(mfn_to_page(old_mfn)->count_info & PGC_count_mask));
+        printk("%s: mfn->count_info=%u\n", __func__,
+               (u32)(mfn_to_page(mfn)->count_info & PGC_count_mask));
+        printk("%s: old_mfn->u.inuse._domain=%u\n", __func__,
+               (u32)(mfn_to_page(old_mfn)->u.inuse._domain));
+        printk("%s: mfn->u.inuse._domain=%u\n", __func__,
+               (u32)(mfn_to_page(mfn)->u.inuse._domain));
+        dump_stack();
         domain_put_page(d, mpaddr, pte, old_pte, 1);
     }
 }

The result of booting domU is below.
(XEN) domain.c:536: arch_domain_create:536 domain 1 pervcpu_vhpt 1
(XEN) tlb_track.c:69: allocated 256 num_entries 256 num_free 256
(XEN) tlb_track.c:115: hash 0xf000004084af0000 hash_size 512
(XEN) regionreg.c:193: ### domain f0000000040f8080: rid=80000-c0000 mp_rid=2000
(XEN) domain.c:573: arch_domain_create: domain=f0000000040f8080
(XEN) mfn=0x000000000102003d, old_mfn=0x0000000001020001
(XEN) assign_domain_page_replace: old_mfn->count_info=0
(XEN) assign_domain_page_replace: mfn->count_info=4
(XEN) assign_domain_page_replace: old_mfn->u.inuse._domain=0
(XEN) assign_domain_page_replace: mfn->u.inuse._domain=32896
(XEN)
(XEN) Call Trace:
(XEN) [<f0000000040ab1b0>] show_stack+0x80/0xa0
(XEN)     sp=f000000007b27c10 bsp=f000000007b215a8
(XEN) [<f00000000406f750>] assign_domain_page_replace+0x220/0x260
(XEN)     sp=f000000007b27de0 bsp=f000000007b21540
(XEN) [<f000000004070a20>] __dom0vp_add_physmap+0x330/0x630
(XEN)     sp=f000000007b27de0 bsp=f000000007b214d8
(XEN) [<f0000000040524a0>] do_dom0vp_op+0x1e0/0x4d0
(XEN)     sp=f000000007b27df0 bsp=f000000007b21498
(XEN) [<f000000004002e30>] fast_hypercall+0x170/0x340
(XEN)     sp=f000000007b27e00 bsp=f000000007b21498
(XEN) vcpu.c:1059:d1 vcpu_get_lrr0: Unmasked interrupts unsupported
(XEN) vcpu.c:1068:d1 vcpu_get_lrr1: Unmasked interrupts unsupported
(XEN) domain.c:943:d1 Domain set shared_info_va to 0xfffffffffff00000
(XEN) mfn=0x0000000000061632, old_mfn=0x000000000006708f
(XEN) assign_domain_page_replace: old_mfn->count_info=1
(XEN) assign_domain_page_replace: mfn->count_info=3
(XEN) assign_domain_page_replace: old_mfn->u.inuse._domain=61177984
(XEN) assign_domain_page_replace: mfn->u.inuse._domain=32896
(XEN)
(XEN) Call Trace:
(XEN) [<f0000000040ab1b0>] show_stack+0x80/0xa0
(XEN)     sp=f000000007b2fbe0 bsp=f000000007b29488
(XEN) [<f00000000406f750>] assign_domain_page_replace+0x220/0x260
(XEN)     sp=f000000007b2fdb0 bsp=f000000007b29420
(XEN) [<f000000004070530>] create_grant_host_mapping+0x1d0/0x390
(XEN)     sp=f000000007b2fdb0 bsp=f000000007b293b8
(XEN) [<f000000004021110>] do_grant_table_op+0xcb0/0x3350
(XEN)     sp=f000000007b2fdc0 bsp=f000000007b292b0
(XEN) [<f000000004002e30>] fast_hypercall+0x170/0x340
(XEN)     sp=f000000007b2fe00 bsp=f000000007b292b0
(XEN) mm.c:698:d1 vcpu 0 iip 0xa0000001004fbbc0: bad I/O port access d 1 0x64
(XEN) mfn=0x0000000000061608, old_mfn=0x0000000000062341
(XEN) assign_domain_page_replace: old_mfn->count_info=1
(XEN) assign_domain_page_replace: mfn->count_info=3
(XEN) assign_domain_page_replace: old_mfn->u.inuse._domain=61177984
(XEN) assign_domain_page_replace: mfn->u.inuse._domain=32896
(XEN)
(XEN) Call Trace:
(XEN) [<f0000000040ab1b0>] show_stack+0x80/0xa0
(XEN)     sp=f000000007b2fbe0 bsp=f000000007b294a8
(XEN) [<f00000000406f750>] assign_domain_page_replace+0x220/0x260
(XEN)     sp=f000000007b2fdb0 bsp=f000000007b29440
(XEN) [<f000000004070530>] create_grant_host_mapping+0x1d0/0x390
(XEN)     sp=f000000007b2fdb0 bsp=f000000007b293d8
(XEN) [<f000000004021110>] do_grant_table_op+0xcb0/0x3350
(XEN)     sp=f000000007b2fdc0 bsp=f000000007b292d0
(XEN) [<f000000004002e30>] fast_hypercall+0x170/0x340
(XEN)     sp=f000000007b2fe00 bsp=f000000007b292d0
(XEN) mfn=0x000000000000a3ea, old_mfn=0x000000000006668d
(XEN) assign_domain_page_replace: old_mfn->count_info=1
(XEN) assign_domain_page_replace: mfn->count_info=3
(XEN) assign_domain_page_replace: old_mfn->u.inuse._domain=61177984
(XEN) assign_domain_page_replace: mfn->u.inuse._domain=32896
(XEN)
(XEN) Call Trace:
(XEN) [<f0000000040ab1b0>] show_stack+0x80/0xa0
(XEN)     sp=f000000007b2fbe0 bsp=f000000007b29450
(XEN) [<f00000000406f750>] assign_domain_page_replace+0x220/0x260
(XEN)     sp=f000000007b2fdb0 bsp=f000000007b293e0
(XEN) [<f000000004070530>] create_grant_host_mapping+0x1d0/0x390
(XEN)     sp=f000000007b2fdb0 bsp=f000000007b29380
(XEN) [<f000000004021110>] do_grant_table_op+0xcb0/0x3350
(XEN)     sp=f000000007b2fdc0 bsp=f000000007b29278
(XEN) [<f000000004002e30>] fast_hypercall+0x170/0x340
(XEN)     sp=f000000007b2fe00 bsp=f000000007b29278

Best Regards,

Akio Takebe

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel