Re: [Xen-devel] Re: Linux Stubdom Problem
2011/9/2 Tim Deegan <tim@xxxxxxx>:
> At 01:12 +0800 on 02 Sep (1314925970), Jiageng Yu wrote:
>> 2011/8/31 Keir Fraser <keir.xen@xxxxxxxxx>:
>> > On 29/08/2011 17:03, "Stefano Stabellini"
>> > <stefano.stabellini@xxxxxxxxxxxxx> wrote:
>> >
>> >>> Oh, so it will. You'd need to arrange for that to be called from
>> >>> inside the guest; or you could implement an add_to_physmap space for
>> >>> it; that could be called from another domain.
>> >>
>> >> "From inside the guest" means hvmloader?
>> >> The good thing about doing it in hvmloader is that we could use the
>> >> traditional PV frontend/backend mechanism to share pages. On the other
>> >> hand, hvmloader doesn't know if we are using stubdoms at the moment and
>> >> it would need to issue the grant table hypercall only in that case.
>> >> Unless we decide to always grant the videoram to guests, but that would
>> >> change once again the domain to which the videoram is accounted
>> >> (dom0/stubdom rather than the guest, which is a bad thing).
>> >> Also, I don't like the idea of making hvmloader stubdom-aware.
>> >
>> > I don't see a problem with it, in principle. I see hvmloader as almost
>> > an in-guest part of the toolstack. The fact that it only executes at
>> > guest boot means it can be fairly closely tied to the toolstack version.
>> >
>> >  -- Keir
>>
>> Hi all,
>>
>>    I report a new issue about vram mapping in Linux stubdom. I use the
>> following patch to map the mfn of the stubdom into the HVM guest.
>>
>> diff -r 0f36c2eec2e1 xen/arch/x86/mm.c
>> --- a/xen/arch/x86/mm.c    Thu Jul 28 15:40:54 2011 +0100
>> +++ b/xen/arch/x86/mm.c    Thu Sep 01 14:52:25 2011 +0100
>> @@ -4663,6 +4665,14 @@
>>          page = mfn_to_page(mfn);
>>          break;
>>      }
>> +    case XENMAPSPACE_mfn:
>> +    {
>> +        if ( !IS_PRIV_FOR(current->domain, d) )
>> +            return -EINVAL;
>> +        mfn = xatp.idx;
>> +        page = mfn_to_page(mfn);
>> +        break;
>> +    }
>
> I would really rather not have this interface; I don't see why we can't
> use grant tables for this.

Hi,

   In the Linux-based stubdom case, we want to keep the HVM guest and its
hvmloader unaware that they are running on a stubdom. Therefore, we need a
way to map vram pages of the stubdom into the HVM guest transparently. If
we used grant tables for this, the general procedure would be:

   1. The stubdom allocates the vram pages.
   2. The stubdom creates grant references for these pages and transfers
      the GRs to the HVM guest (or hvmloader).
   3. The HVM guest (or hvmloader) maps these pages into its memory space
      using the GRs.
   4. The stubdom and the HVM guest share the same vram memory.

   In this procedure, the HVM guest (or hvmloader) must know that it is
running on a stubdom. Additionally, if I modified the grant table to map
pages without any participation of the HVM guest (or hvmloader), it would
violate the design goals of the grant table. So I think the grant table
may not be suitable for our case.

   Another idea is to allocate the vram in the HVM guest and have the
stubdom map the vram pages into its own memory space. In this scenario, I
must delay the initialisation of xen-fbfront in the stubdom until qemu has
populated the vram pages for the HVM guest (using
xc_domain_populate_physmap_exact()). Then I map these pages into the
stubdom kernel through a function similar to privcmd_ioctl_mmap_batch(),
but without using the privcmd device.

   If anyone has any advice for me, I'm all ears.

   Thanks,

   Jiageng Yu

> If you must do it this way, it should check that the MFN is valid and
> that it's owned by the caller.
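[For readers following along: the proposed XENMAPSPACE_mfn path can be modelled outside Xen as a small userspace sketch. The XENMAPSPACE_* constants mirror the patch above; the "p2m" array, the toy `priv_for()` policy standing in for IS_PRIV_FOR(), and all struct layouts are illustrative assumptions, not real hypervisor code.]

```c
#include <stdint.h>

#define XENMAPSPACE_shared_info 0
#define XENMAPSPACE_grant_table 1
#define XENMAPSPACE_gmfn        2
#define XENMAPSPACE_mfn         3   /* proposed in the patch */

#define P2M_SIZE    16
#define INVALID_MFN ((uint64_t)-1)

struct domain {
    int domid;
    uint64_t p2m[P2M_SIZE];         /* toy gpfn -> mfn table */
};

struct xen_add_to_physmap {
    unsigned int space;
    uint64_t idx;                   /* for XENMAPSPACE_mfn: the raw MFN */
    uint64_t gpfn;                  /* where to map it in the guest */
};

/* Stand-in for IS_PRIV_FOR(): caller must be privileged over target. */
static int priv_for(const struct domain *caller, const struct domain *target)
{
    return caller->domid == 0 || caller == target;  /* toy policy */
}

static void domain_init(struct domain *d, int domid)
{
    d->domid = domid;
    for (int i = 0; i < P2M_SIZE; i++)
        d->p2m[i] = INVALID_MFN;
}

/* Simplified dispatch mirroring the patched switch in arch/x86/mm.c:
 * a privileged caller (e.g. the stubdom) asks for a raw MFN to appear
 * at a given gpfn in the target guest's physmap. */
static int add_to_physmap(struct domain *caller, struct domain *d,
                          const struct xen_add_to_physmap *xatp)
{
    uint64_t mfn;

    switch (xatp->space) {
    case XENMAPSPACE_mfn:
        if (!priv_for(caller, d))
            return -1;              /* -EINVAL in the patch */
        mfn = xatp->idx;
        break;
    default:
        return -1;                  /* other spaces elided */
    }

    if (xatp->gpfn >= P2M_SIZE)
        return -1;                  /* MFN-validity check Tim asked for
                                     * would also go here */
    d->p2m[xatp->gpfn] = mfn;       /* guest_physmap_add_page() stand-in */
    return 0;
}
```

A privileged domain can then place one of its frames into the guest's physmap with a single call, which is the "transparent to the guest" property the grant-table flow lacks.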
>>      default:
>>          break;
>>      }
>> @@ -4693,13 +4708,17 @@
>>      }
>>
>>      /* Unmap from old location, if any. */
>> -    gpfn = get_gpfn_from_mfn(mfn);
>> -    ASSERT( gpfn != SHARED_M2P_ENTRY );
>> -    if ( gpfn != INVALID_M2P_ENTRY )
>> -        guest_physmap_remove_page(d, gpfn, mfn, 0);
>> +    if ( xatp.space != XENMAPSPACE_mfn )
>> +    {
>> +        gpfn = get_gpfn_from_mfn(mfn);
>> +        ASSERT( gpfn != SHARED_M2P_ENTRY );
>> +        if ( gpfn != INVALID_M2P_ENTRY )
>> +            guest_physmap_remove_page(d, gpfn, mfn, 0);
>> +    }
>
> Why did you make this change?
>
>>      /* Map at new location. */
>>      rc = guest_physmap_add_page(d, xatp.gpfn, mfn, 0);
>>
>> diff -r 0f36c2eec2e1 xen/include/public/memory.h
>> --- a/xen/include/public/memory.h    Thu Jul 28 15:40:54 2011 +0100
>> +++ b/xen/include/public/memory.h    Thu Sep 01 14:52:25 2011 +0100
>> @@ -212,6 +212,7 @@
>>  #define XENMAPSPACE_shared_info 0 /* shared info page */
>>  #define XENMAPSPACE_grant_table 1 /* grant table page */
>>  #define XENMAPSPACE_gmfn        2 /* GMFN */
>> +#define XENMAPSPACE_mfn         3 /* MFN */
>>      unsigned int space;
>>
>>  #define XENMAPIDX_grant_table_status 0x80000000
>>
>> I got the error at:
>>
>> arch_memory_op()
>>   --> case XENMEM_add_to_physmap:
>>       --> if ( page )
>>           --> put_page(page);
>>               --> free_domheap_page(page);
>>                   --> BUG_ON((pg[i].u.inuse.type_info &
>>                               PGT_count_mask) != 0);
>>
>> In my case, pg[i].u.inuse.type_info & PGT_count_mask == 1.
>
> OK, so you've dropped the last untyped refcount on a page which still
> has a type count. That means the reference counting has got messed up
> somewhere.
>
>> Actually, in the Linux-based stubdom case, I need to keep these vram
>> pages mapped in the qemu of the stubdom. But it seems that granting
>> pages implies having the pages unmapped in the process that grants them.
>> Maybe the grant table cannot solve the vram mapping problem.
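[For readers unfamiliar with the invariant behind that BUG_ON: a Xen page has both a general (untyped) reference count and a type count, and every type reference implicitly pins a general reference, so the last general reference may only be dropped once the type count is zero. The toy struct below reproduces that rule in plain userspace C; the field names echo Xen's page_info but the code is illustrative, not the real mm code.]

```c
#include <stdint.h>

#define PGT_count_mask 0xffff

struct page_info {
    unsigned long count_info;   /* general (untyped) references */
    unsigned long type_info;    /* low bits: type reference count */
    int freed;
};

static void get_page(struct page_info *pg)      { pg->count_info++; }
static void get_page_type(struct page_info *pg) { pg->type_info++; }
static void put_page_type(struct page_info *pg) { pg->type_info--; }

/* Models free_domheap_page(): runs when the last general ref is dropped. */
static int free_domheap_page(struct page_info *pg)
{
    /* The BUG_ON hit in the report: freeing with a live type count. */
    if ((pg->type_info & PGT_count_mask) != 0)
        return -1;              /* would be BUG_ON() in Xen */
    pg->freed = 1;
    return 0;
}

static int put_page(struct page_info *pg)
{
    if (--pg->count_info == 0)
        return free_domheap_page(pg);
    return 0;
}
```

In other words, hitting the BUG_ON with `type_info & PGT_count_mask == 1` means something (e.g. a writable mapping still held by the stubdom's qemu) still holds a type reference while the last general reference is being dropped: the counts were not released in the right order.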
>
> But this patch doesn't use the grant tables at all.
>
> Tim.
>
> --
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel