[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
RE: [Xen-ia64-devel] RFC: ptc.ga implementation for SMP-g
- To: "Tristan Gingold" <Tristan.Gingold@xxxxxxxx>, <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
- From: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
- Date: Tue, 4 Apr 2006 23:12:19 +0800
- Delivery-date: Tue, 04 Apr 2006 08:13:04 -0700
- List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
- Thread-index: AcZX+FGZLTUFEZx7QBiRELlmrMFtsAAAIkyw
- Thread-topic: [Xen-ia64-devel] RFC: ptc.ga implementation for SMP-g
>From: Tristan Gingold [mailto:Tristan.Gingold@xxxxxxxx]
>Sent: April 4, 2006 23:03
>> Yes, resetting the rid impacts performance, which makes the IPI method
>> worse. I have an idea for handling ptc.ga, but I am not sure whether it
>> is feasible.
>> The control flow is as below:
>> 1. vcpu1 emulates ptc.ga
>> 2. vcpu1 executes vhpt_flush_address to purge the current LP's VHPT,
>> and executes ptc.l to purge the machine TLB on the current LP.
>> 3. vcpu1 creates a structure which describes this ptc.ga, including the
>> virtual address, address range, and rid, and attaches this structure to vcpu2.
>> 4. Then vcpu1 sets a flag in vcpu2, indicating that a ptc.ga has been
>> executed on this VMM.
>> 5. When vcpu2 traps into the hypervisor, the hypervisor checks whether this
>> flag is set; if so, vcpu2 executes vhpt_flush_address and ptc.l.
>> 6. vcpu1 waits for vcpu2 until it has done the job.
>> There is a time window between the purges on vcpu1 and vcpu2; I'm not sure
>> whether it is workable.
>An IPI could make vcpu2 enter the hypervisor faster.
Yes, but an IPI will cause extra save/restore on the other vcpus.
And, as you said, an IPI may cause trouble when considering migration.
If the above method is feasible, I prefer it.
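To make the discussion concrete, here is a rough C sketch of the deferred-purge
flow from steps 3-5 above. All names (ptc_ga_req, post_ptc_ga,
check_pending_ptc_ga, tlb_purges) are illustrative, not actual Xen code, and the
real implementation would of course need memory barriers and the real
vhpt_flush_address/ptc.l calls:

```c
#include <stdbool.h>

/* Hypothetical request structure from step 3: the parameters of the
 * guest's ptc.ga, recorded by vcpu1 for vcpu2 to act on later. */
struct ptc_ga_req {
    unsigned long vaddr;    /* start of the virtual range to purge */
    unsigned long range;    /* size of the range                   */
    unsigned long rid;      /* region id the purge applies to      */
    bool          pending;  /* step 4: the flag set by the issuer  */
};

/* Minimal stand-in for a vcpu; tlb_purges just counts how many times
 * the local VHPT flush + ptc.l would have run. */
struct vcpu {
    struct ptc_ga_req ptc_req;
    int tlb_purges;
};

/* Steps 3-4: vcpu1 records the ptc.ga parameters on the target vcpu
 * and then sets the pending flag (on real SMP a barrier is needed so
 * the flag is observed only after the parameters are visible). */
static void post_ptc_ga(struct vcpu *target, unsigned long va,
                        unsigned long range, unsigned long rid)
{
    target->ptc_req.vaddr   = va;
    target->ptc_req.range   = range;
    target->ptc_req.rid     = rid;
    target->ptc_req.pending = true;
}

/* Step 5: called on every hypervisor entry by the target vcpu; if the
 * flag is set, perform the local purge and clear the flag so vcpu1
 * can stop waiting (step 6). */
static void check_pending_ptc_ga(struct vcpu *self)
{
    if (self->ptc_req.pending) {
        /* real code: vhpt_flush_address(...) followed by ptc.l */
        self->tlb_purges++;
        self->ptc_req.pending = false;
    }
}
```

The window Anthony mentions is visible here: between post_ptc_ga returning on
vcpu1 and check_pending_ptc_ga running on vcpu2, vcpu2 may still use stale
translations, which is why step 6 has vcpu1 wait for completion.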
>Seems Ok for me.
Xen-ia64-devel mailing list