
RE: [Xen-ia64-devel] [PATCH] tlb miss handler


  • To: "Isaku Yamahata" <yamahata@xxxxxxxxxxxxx>, <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
  • Date: Mon, 20 Feb 2006 23:35:02 +0800
  • Delivery-date: Mon, 20 Feb 2006 15:48:00 +0000
  • List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
  • Thread-index: AcYol3J04aa7QAMnTBCXKiC5uveKsQNmOrFQ
  • Thread-topic: [Xen-ia64-devel] [PATCH] tlb miss handler

>From: Isaku Yamahata
>Sent: February 3, 2006 15:56
>
>I updated this patch for commit, following the comments.
>I didn't optimize it; I'm leaving that to a future tuning phase.
>
>
>--
>Yamahata

Hi Yamahata-san,
        Good catch here; a few comments below:

- "alt itlb miss by a guest must be handled"
        Dom0 runs from the very start with vhpt enabled in all regions. There 
should be no alt itlb miss raised from dom0.

- It's better to copy the whole original alt_dtlb_miss logic into the dtlb miss 
handler, e.g.:
        * The checks on psr.cpl, speculative loads and non-access instructions 
are only tricks used in the alt_dtlb_miss handler to ensure a coherent memory 
attribute. So the better sequence is to check for the identity-mapped hypervisor 
area first, and only then apply those three checks. For a normal DTLB miss into 
other regions, those checks can then be skipped.

        * Why did you only handle the cacheable area (meaning 0xf000....) by 
identity mapping, while leaving the uncacheable area (meaning 0xe8000...) to 
page_fault? That would hurt performance a lot.

        A simple way is to do what the VTI domain does: jump to the 
alt_dtlb_miss code after the necessary check for the identity mapping. Or you 
can merge the two handlers as in your patch, but please keep the necessary 
logic there; a rough model of that flow is sketched below. :-)
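
        To illustrate the first point, here is a tiny C model (illustrative 
only, not real Xen or ivt.S code) of when the alternate miss vectors fire: 
they are taken only when the VHPT walker is disabled for the faulting region, 
which never happens for dom0.

#include <stdbool.h>
#include <stdio.h>

static bool takes_alt_miss_vector(bool pta_ve, bool rr_ve)
{
    /* The walker is enabled for a region only when both PTA.ve and RR.ve
     * are set; with it disabled, a miss goes to the alternate vector. */
    return !(pta_ve && rr_ve);
}

int main(void)
{
    /* dom0 has the VHPT enabled in every region, so never the alt vector. */
    printf("dom0-style region : alt vector? %d\n", takes_alt_miss_vector(true, true));
    /* A region with rr.ve = 0 would take the alternate vector instead. */
    printf("rr.ve=0 region    : alt vector? %d\n", takes_alt_miss_vector(true, false));
    return 0;
}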
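
        And here is a rough C model of the merged dtlb miss flow I have in 
mind (the real handler is ia64 assembly in ivt.S; the helper names, constants 
and cpl convention below are made up for illustration): check the 
identity-mapped hypervisor ranges first, apply the psr.cpl / speculation / 
non-access checks only for those, and send every other address down the 
normal VHPT / page-table path instead of page_fault.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* 0xf000... : cacheable identity-mapped hypervisor area   */
static bool in_ident_cached(uint64_t va)   { return (va >> 60) == 0xfUL;  }
/* 0xe800... : uncacheable identity-mapped hypervisor area */
static bool in_ident_uncached(uint64_t va) { return (va >> 59) == 0x1dUL; }

/* Stand-ins for the real assembly paths in ivt.S. */
static void insert_identity_dtlb(uint64_t va, bool cacheable)
{ printf("itc.d identity 0x%" PRIx64 " (%s)\n", va, cacheable ? "WB" : "UC"); }
static void vhpt_or_page_table_walk(uint64_t va)
{ printf("normal dtlb miss path for 0x%" PRIx64 "\n", va); }
static void reflect_fault(uint64_t va)
{ printf("reflect / page_fault for 0x%" PRIx64 "\n", va); }

static void dtlb_miss(uint64_t ifa, unsigned cpl, bool spec_load, bool non_access)
{
    bool cached   = in_ident_cached(ifa);
    bool uncached = in_ident_uncached(ifa);

    if (cached || uncached) {
        /* Only the identity-mapped areas need the alt_dtlb_miss-style
         * checks (psr.cpl, speculative load, non-access) before an
         * identity translation with the matching attribute is inserted. */
        if (cpl != 0 || spec_load || non_access)
            reflect_fault(ifa);
        else
            insert_identity_dtlb(ifa, cached);
    } else {
        /* Any other address: the usual VHPT / page-table walk, with no
         * extra checks on this fast path. */
        vhpt_or_page_table_walk(ifa);
    }
}

int main(void)
{
    dtlb_miss(0xf000000000123000ULL, 0, false, false);  /* cacheable identity   */
    dtlb_miss(0xe800000000045000ULL, 0, false, false);  /* uncacheable identity */
    dtlb_miss(0x0000000040000000ULL, 2, false, false);  /* ordinary guest VA    */
    return 0;
}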

Thanks,
Kevin


_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

