
Re: [Xen-ia64-devel] Patch for PV-on-HVM for IPF



Hi Doi.

Although I haven't dug into the details, I want to check a suspicion.
The code that relinquishes the mm assumes there are no racy references,
so your patch looks too simple and not SMP-safe to me.
For example, mm->pgd isn't protected.
What do you think?
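
To make the concern concrete, here is a minimal user-space analogy
(pthreads, not Xen code; all names here are made up) of the
check-then-use window: the NULL check can pass on one CPU while another
CPU relinquishes the mm and frees the page tables, so the later lookup
may still walk freed memory.

/* Illustration only -- NOT Xen code.  lookup_thread() stands in for the
 * patched gmfn_to_mfn_foreign() path, destroy_thread() for the mm
 * relinquish done at domain destruction. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static unsigned long *pgd;                /* stands in for d->arch.mm.pgd */

static void *lookup_thread(void *arg)
{
    unsigned long *table = pgd;
    (void)arg;
    if (table == NULL)                    /* the check added by the patch  */
        return NULL;
    usleep(1000);                         /* window for destruction to run */
    printf("walk: %lu\n", table[0]);      /* may read already-freed memory */
    return NULL;
}

static void *destroy_thread(void *arg)
{
    unsigned long *old = pgd;
    (void)arg;
    pgd = NULL;                           /* relinquish: clear the pointer */
    free(old);                            /* ... and free the page tables  */
    return NULL;
}

int main(void)
{
    pthread_t lookup, destroy;

    pgd = calloc(1, sizeof(*pgd));
    pthread_create(&lookup, NULL, lookup_thread, NULL);
    pthread_create(&destroy, NULL, destroy_thread, NULL);
    pthread_join(lookup, NULL);
    pthread_join(destroy, NULL);
    return 0;
}

Built with -pthread and run under AddressSanitizer, this typically
reports a heap-use-after-free at the table read.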

BTW, I suppose you also have another SMP issue related to PV-on-HVM.
What's going on with it?

On Wed, Sep 27, 2006 at 06:46:45PM +0900, DOI Tsunehisa wrote:
> Sorry, I forgot to attach the patch.
> 
> DOI Tsunehisa wrote:
> > Hi all,
> > 
> >   We are currently testing the PV-on-HVM drivers for IPF, and we have
> > found a problem where the hypervisor crashes during destruction of a
> > VT-i domain that uses the PV-on-HVM drivers.
> > 
> >   The procedure to reproduce this problem is:
> > 
> >    * Create a VT-i domain.
> >      [dom0]# xm create -f hvm.conf
> >    * Insert modules for PV-on-HVM on the domain.
> >      [domvti]# insmod xen-platform-pci.ko
> >      [domvti]# insmod xenbus.ko
> >      [domvti]# insmod xen-vbd.ko
> >      [domvti]# insmod xen-vnif.ko
> >    * Activate VNIF network interface.
> >      [dom0]# xm network-attach <domid> eth1
> >      [domvti]# ifconfig eth1 up
> >    * Destroy the domain
> >      [dom0]# xm destroy <domid>
> >      ..... hypervisor crash  ....
> > 
> >   While investigating the hypervisor crash mechanism, we found its
> > cause:
> > 
> >   * The PV-on-HVM VNIF uses the gnttab_copy() function.
> >     - it is used for the copy method of the VNIF backend.
> >     - it needs to refer to the p2m table.
> >   * gnttab_copy() may be called during domain destruction.
> >     - it is called by the dom0 VNIF backend.
> >   * The hypervisor invalidates the p2m table first during destruction.
> >     - gnttab_copy() can no longer refer to the p2m table, which causes
> >       the hypervisor crash (see the sketch below).
> >     - In the x86 code, p2m table invalidation is delayed to the last
> >       phase of domain destruction.
> >       => but we don't like that approach; it means the p2m table still
> >          exists after the memory has been relinquished.
> > 
> >   So, I'll post a patch to avoid the hypervisor crash.
> > 
> >    * pv-avoid-crash.patch
> >      + modify the p2m table converter (gmfn_to_mfn_foreign())
> >        - return INVALID_MFN during VT-i domain destruction
> > 
> >   We have tested that this patch solves the problem.
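> > 
> >   A rough caller-side sketch of what this implies (again not the
> >   actual Xen grant-copy code; copy_one_frame() and translate_gpfn()
> >   are made-up stand-ins): whoever calls gmfn_to_mfn_foreign() must
> >   treat INVALID_MFN as "the domain is going away, fail the copy"
> >   rather than use it as a frame number.
> > 
> > #define INVALID_MFN (~0UL)
> > 
> > /* made-up stand-in for gmfn_to_mfn_foreign() on a dying domain */
> > static unsigned long translate_gpfn(unsigned long gpfn)
> > {
> >     (void)gpfn;
> >     return INVALID_MFN;
> > }
> > 
> > /* made-up stand-in for one step of the grant-copy path */
> > static int copy_one_frame(unsigned long gpfn)
> > {
> >     unsigned long mfn = translate_gpfn(gpfn);
> > 
> >     if (mfn == INVALID_MFN)
> >         return -1;              /* fail the copy instead of crashing */
> > 
> >     /* ... perform the actual copy using mfn ... */
> >     return 0;
> > }
> > 
> > int main(void)
> > {
> >     return copy_one_frame(3) == -1 ? 0 : 1;
> > }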
> > 
> > Thanks,
> > - Tsunehisa Doi
> > 
> > 

> # HG changeset patch
> # User Doi.Tsunehisa@xxxxxxxxxxxxxx
> # Node ID 354f40902eb9e3f92fdf8b5e9268052cd955230b
> # Parent  f34e37d0742d80ccfefd017a91f93310ebc2dfe8
> Modify p2m converter to avoid hypervisor crash
> during destruction of VT-i domain with PV-on-HVM.
> 
> Signed-off-by: Tsunehisa Doi <Doi.Tsunehisa@xxxxxxxxxxxxxx>
> Signed-off-by: Tomonari Horikoshi <t.horikoshi@xxxxxxxxxxxxxx>
> 
> diff -r f34e37d0742d -r 354f40902eb9 xen/arch/ia64/xen/mm.c
> --- a/xen/arch/ia64/xen/mm.c  Tue Sep 26 19:11:33 2006 -0600
> +++ b/xen/arch/ia64/xen/mm.c  Wed Sep 27 17:24:58 2006 +0900
> @@ -396,6 +396,12 @@ gmfn_to_mfn_foreign(struct domain *d, un
>  {
>       unsigned long pte;
>  
> +     // This function may be called from __gnttab_copy()
> +     // during destruction of VT-i domain with PV-on-HVM driver.
> +     if (unlikely(d->arch.mm.pgd == NULL)) {
> +             if (VMX_DOMAIN(d->vcpu[0]))
> +                     return INVALID_MFN;
> +     }
>       pte = lookup_domain_mpa(d,gpfn << PAGE_SHIFT, NULL);
>       if (!pte) {
>               panic("gmfn_to_mfn_foreign: bad gpfn. spinning...\n");


-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

