RE: [Xen-ia64-devel] [PATCH] per vcpu vhpt
Hi Yamahata,

Sorry for the late response. This is a great patch:

1. It should alleviate the pain of migration.
2. VTI domains and domU now use the same TR number for the VHPT mapping, so there is one more TC available to use, which is good.

One very small comment: I noticed that domU still uses VHPT_ADDR to map the VHPT, while functions such as __vhpt_flush access the VHPT through __va(vhpt_maddr). Because the VHPT is allocated from the domain heap, this may cause unnecessary TLB misses. There are two natural options here:

1. Have domU map the VHPT at __va(vhpt_maddr), which eliminates the unnecessary TLB misses.
2. Have functions like __vhpt_flush access the VHPT through VHPT_ADDR.

One further thought: since we seem to plan to expose the p2m table to domU, if we use VHPT_ADDR to map the VHPT, itc could be "emulated" inside domU. When domU wants to insert a mapping, it can look up the p2m table for the machine physical address and then insert the translated mapping into the VHPT directly.

By the way, this issue is not introduced by this patch; it has been there for a long time.

Thanks,
Anthony

>-----Original Message-----
>From: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
>[mailto:xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Isaku Yamahata
>Sent: October 6, 2006 17:14
>To: xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
>Subject: [Xen-ia64-devel] [PATCH] per vcpu vhpt
>
>
>per vcpu vhpt
>Implement the per-vcpu VHPT option: allocate a VHPT per vcpu.
>Added a compile-time option, xen_ia64_pervcpu_vhpt=y, to enable it.
>Its default is on.
>Added a Xen boot-time option, pervcpu_vhpt=0, to disable it.
>
>This patch focuses on vcpu migration between physical cpus,
>because vcpus are migrated heavily under the credit scheduler.
>This patch tries to reduce vTLB flushes when a vcpu is migrated.
>
>--
>yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel