[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Xen-ia64-devel] RE: how to put kernel module in xen/ipf


  • To: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
  • From: "Magenheimer, Dan (HP Labs Fort Collins)" <dan.magenheimer@xxxxxx>
  • Date: Wed, 22 Jun 2005 06:39:05 -0700
  • Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 22 Jun 2005 13:38:28 +0000
  • List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
  • Thread-index: AcV2H2lMV/sz2iLUSG2pTMbU1moFDwASkl6wABeprlAAGW78cA==
  • Thread-topic: how to put kernel module in xen/ipf

This describes the mechanics for the call but there are several things I don't understand,
especially after seeing the patch:
 
1) Why would these modules be in Xen at all?  They appear to be (at least very similar to) the
    code from Domain0.  Why is the functionality duplicated from Domain0?  Are you
    trying to eliminate Domain0?
2) How do these modules get dynamically "loaded"?  Xen/ia64 currently has no disk
    access or any driver support; all bits are currently put in memory at boot time by elilo.
    (And if they are not dynamically loaded, why are they "kernel modules"?)
3) Is there any similarity between this and Xen/x86(VTx)?  I looked for similar code/mechanism
   on Xen/x86 to help me understand your intent but didn't find anything.
 
Unless I'm completely misunderstanding, this seems to be heading in the direction
of major architectural departures from the direction of Xen.  I think we should have some
more discussion before applying these "km" patches.
 
Could you provide an architectural overview of what you are trying to do here rather
than just the mechanics of how the VTLB and hypercall work?

Thanks,
Dan


From: Xu, Anthony [mailto:anthony.xu@xxxxxxxxx]
Sent: Tuesday, June 21, 2005 7:57 PM
To: Magenheimer, Dan (HP Labs Fort Collins)
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Subject: RE: how to put kernel module in xen/ipf

Hi, Dan,

 

We implemented a per-domain VTLB infrastructure on XEN/IPF that tracks guest TLB information. The VTLB has a fixed size; when it runs out of entries, the HV discards all VTLB entries and recycles the VTLB. We added a "lock" flag to the VTLB entry: when the HV recycles the VTLB, entries with the "lock" flag set are not discarded, but if the guest purges the TLB with an instruction like "ptc", locked entries are discarded as well.
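As a rough illustration of the lock flag's semantics (all structure and function names below are invented for this sketch, not the actual Xen/ia64 code):

```c
#include <stddef.h>

#define VTLB_SIZE 8

/* Hypothetical entry layout for a fixed-size per-domain VTLB. */
struct vtlb_entry {
    unsigned long vaddr;   /* guest virtual address */
    unsigned long gpaddr;  /* guest physical address */
    int valid;
    int locked;            /* the "lock" flag: survives recycling */
};

/* Recycling discards every entry except those marked locked. */
static void vtlb_recycle(struct vtlb_entry *vtlb)
{
    for (size_t i = 0; i < VTLB_SIZE; i++)
        if (!vtlb[i].locked)
            vtlb[i].valid = 0;
}

/* A guest "ptc"-style purge discards a matching entry even if locked. */
static void vtlb_purge(struct vtlb_entry *vtlb, unsigned long vaddr)
{
    for (size_t i = 0; i < VTLB_SIZE; i++)
        if (vtlb[i].valid && vtlb[i].vaddr == vaddr) {
            vtlb[i].valid = 0;
            vtlb[i].locked = 0;
        }
}
```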

 

Before invoking the hypercall,

The kernel module performs a dummy read of the parameter buffer once per page, to make sure the translation for the parameter has been inserted into the VTLB infrastructure. The kernel module then calls another new hypercall, which takes no pointer parameters, to lock the above translations in the VTLB infrastructure.
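The guest-side preparation could be sketched roughly as follows (the page size, function names, and the lock hypercall are all hypothetical; the hypercall is stubbed out here so the touch loop can be exercised):

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE 16384UL  /* 16KB, one common ia64 page size (assumption) */

/* Stub for the pointer-free "lock translations" hypercall; the real
 * call's name and ABI are not given in the mail. */
static int lock_calls;
static void hypercall_lock_vtlb(unsigned long vaddr, unsigned long len)
{
    (void)vaddr; (void)len;
    lock_calls++;
}

/* Touch one byte per page of the parameter buffer so each translation
 * is faulted into the VTLB, then lock those translations. */
static void prepare_hypercall_buffer(const void *buf, size_t len)
{
    const volatile uint8_t *p = buf;
    for (size_t off = 0; off < len; off += PAGE_SIZE)
        (void)p[off];
    if (len)
        (void)p[len - 1];  /* make sure the final page is covered too */
    hypercall_lock_vtlb((unsigned long)(uintptr_t)buf, len);
}
```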

 

Then, when the hypercall is invoked,

In the HV, copy_from_user() or copy_to_user() is used to fetch parameters or return results. In these functions, the HV looks up the VTLB infrastructure to find the guest physical address of the parameter; because the translation has been locked in the VTLB, the HV is guaranteed to find it. The HV then obtains the corresponding machine address from the physical-to-machine address table. Since the HV uses region 7 for identity mapping, it can derive the identity virtual address for that machine address and, finally, perform a normal copy operation through this identity virtual address.
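The address translation chain inside copy_from_user()/copy_to_user() can be sketched like this (the two lookup functions are toy stand-ins simulating a single locked mapping; only the region-7 arithmetic reflects the description above, and the region base value is an assumption):

```c
#define REGION7_BASE 0xE000000000000000UL  /* identity-mapped region 7 (assumed base) */

/* Toy stand-in for the VTLB lookup: the locked guest translation,
 * preserving the offset within one 16KB page. */
static unsigned long vtlb_lookup_gpaddr(unsigned long guest_vaddr)
{
    return 0x4000UL + (guest_vaddr & 0x3FFFUL);
}

/* Toy stand-in for the physical-to-machine table: pretend guest
 * physical page 0x4000 is machine page 0x10000. */
static unsigned long p2m_lookup(unsigned long gpaddr)
{
    return 0x10000UL + (gpaddr & 0x3FFFUL);
}

/* Translate a locked guest virtual address to the HV's region-7
 * identity-mapped virtual address, through which the HV can do a
 * normal memcpy-style copy. */
static unsigned long hv_identity_vaddr(unsigned long guest_vaddr)
{
    unsigned long gpaddr = vtlb_lookup_gpaddr(guest_vaddr);
    unsigned long maddr  = p2m_lookup(gpaddr);
    return REGION7_BASE | maddr;
}
```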

 

After the hypercall,

The guest application will certainly unmap the memory allocated for passing the hypercall parameters, and that unmap will purge the TLB for those addresses, so the locked VTLB entries in the VTLB infrastructure can be recycled.

 

We have tested this parameter-passing mechanism with several hypercalls, such as GETMEMLIST, and it works well.

 

Could we check in this patch and discuss further?

 

-Anthony


From: Magenheimer, Dan (HP Labs Fort Collins) [mailto:dan.magenheimer@xxxxxx]
Sent: June 21, 2005 22:04
To: Xu, Anthony
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Subject: RE: how to put kernel module in xen/ipf

 

Yes, the hypercall parameter mechanism is still evolving for Xen/ia64.  Very few
hypercalls are necessary to run domain0, so experimentation with different hypercall
mechanisms has waited until multi-domain work.

Can you explain more about kernel modules?  I know (roughly) how they
work for Linux, but not how they are used on Xen/x86.  Others on this
list might like to learn too, so perhaps you could explain the design
in detail?


Thanks,
Dan

 


From: Xu, Anthony [mailto:anthony.xu@xxxxxxxxx]
Sent: Monday, June 20, 2005 11:10 PM
To: Magenheimer, Dan (HP Labs Fort Collins)
Cc: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Subject: how to put kernel module in xen/ipf

Hi, Dan,       

The XEN/IPF kernel module differs considerably from XEN/ia32, especially in the mechanism for passing hypercall parameters. Currently we create a directory "km" under xen/arch/ia64 and put the kernel module code in that directory.

        Any comment?

-Anthony

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel

 

