Re: [Xen-devel] [PATCH][RFC] Supporting Enlightened Windows 2008 Server
On 7/3/08 01:10, "Ky Srinivasan" <ksrinivasan@xxxxxxxxxx> wrote:

> The Viridian API allows the guest to pass in a variable list of arguments to
> the TLB flush call (HvFlushVirtualAddressList). Furthermore, both forms of
> the flush API (HvFlushVirtualAddressSpace and HvFlushVirtualAddressList) can
> specify a list of vcpus that should be involved in the flush. So, as you have
> noted, we need a mechanism to coordinate the flush operation amongst the set
> of vcpus involved, which means we need to be able to give up the physical CPU
> in the hypervisor while waiting for the flush to complete. I have used
> wait_on_xen_event_channel() to implement this synchronization. Since we don't
> preserve the stack state when we block in the hypervisor, I have used a
> separate per-vcpu page to hold the hypercall input parameters for calls that
> can potentially block in the hypervisor. From what I have seen, win2k8 server
> mostly specifies all the processors in ProcessorMask. So, I chose to
> implement the TLB flush operations using a single serialization object that
> tracks both the set of vcpus involved in the flush operation and the list of
> pages to be flushed.

Clearly, avoiding emulating IPI-to-all-CPUs is rather likely to be a win. But
is the very selective subset-of-CPUs and subset-of-addresses really that
useful? Do you get any significant win over just calling
hvmop_flush_tlb_all()? Also, we need to weigh up the likely penetration of
NPT- and EPT-capable processors by the time w2k8 is shipping in any volume.
But even ignoring that, I bet 95% of the benefit of this patch can be got
with a much smaller patch.

 -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel