RE: [Xen-devel] Windows SMP
> > Is it possible for a virtualised DomU to trap the MMIO write itself, or
> > can it only be trapped by the hypervisor?
>
> It could, by trapping all accesses to the APIC page (remove mapping, hook
> page-fault handler). It's going to be much easier to get help from the
> hypervisor!

Hmmm... maybe easier with the hypervisor, but by trapping writes I could do
it all in my drivers.

Assuming the hypervisor route, if I understand correctly it could go like this:

. develop a mechanism for qemu's TPR write emulation to also signal my
  drivers somehow. This signalling mechanism would have to be low overhead,
  but could also be pretty lazy (if these TPR writes are happening often
  then it doesn't matter if we miss a few, as we'll patch them all pretty
  quickly)

. the PV drivers get the signal, perform some validation on the instruction
  that caused the TPR write to make sure it's patchable, and then overwrite
  the instruction with a call to my own routine, making sure that this is
  done in an SMP-safe way and taking care of caching etc...

. my own routine does ??? (I haven't figured this out exactly, but there is
  plenty of code out there that does it already)

Thinking about this some more... could it work the other way around instead:

. my drivers publish to qemu (somehow... io space write maybe???) the
  required patch for the mmio write instruction, eg a patch request might
  look like:
    <01>                  (magic number indicating a TPR write patch)
    <05>                  (length)
    <01><02><03><04><05>  (code to replace the TPR write instruction with)

. qemu stores the above in a table, and every time an mmio write instruction
  for the TPR register occurs which is exactly 5 bytes in length, patches
  the instruction with the code given

> > Btw, is it the vmexit that is slow about these TPR writes, or is it the
> > writes themselves?
>
> Both. As well as the VMEXIT you have a run through the instruction emulator
> and into the apic device model. It sucks pretty bad if you do it often.

I see. It's the sum of a whole load of tiny overheads that adds up...

Thanks

James
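For what it's worth, a minimal sketch (in C, not taken from the GPLPV drivers,
Xen or qemu; patch_site and handler are made-up names) of the driver-side step
described above, i.e. overwriting a 5-byte MMIO write to the TPR with a 5-byte
near CALL into the drivers' own routine:

    /*
     * Illustrative sketch only -- not code from any existing driver.
     * Replaces a 5-byte instruction that writes the local APIC TPR with a
     * 5-byte near CALL (opcode 0xE8 + rel32) into our own handler.
     * A real implementation must quiesce the other vCPUs (or patch via a
     * breakpoint) and serialise/flush the instruction stream afterwards.
     */
    #include <stdint.h>
    #include <string.h>

    static void patch_tpr_write(uint8_t *patch_site, void (*handler)(void))
    {
        uint8_t call_insn[5];
        /* rel32 is relative to the end of the 5-byte CALL instruction */
        int32_t rel = (int32_t)((uintptr_t)handler -
                                ((uintptr_t)patch_site + sizeof(call_insn)));

        call_insn[0] = 0xE8;                      /* CALL rel32 */
        memcpy(&call_insn[1], &rel, sizeof(rel));

        /* Not SMP-safe as written: another vCPU could execute a half-written
         * instruction. Shown only to illustrate the shape of the patch. */
        memcpy(patch_site, call_insn, sizeof(call_insn));
    }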
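And a similar sketch of the "other way around" idea: the patch descriptor the
drivers might publish to qemu, and the qemu-side length check before patching.
The struct layout, magic value and function names are invented for
illustration, not an existing qemu interface:

    /*
     * Hypothetical descriptor the PV drivers could publish to qemu
     * (e.g. via an I/O port write); not an existing qemu interface.
     */
    #include <stdint.h>
    #include <string.h>

    #define TPR_PATCH_MAGIC    0x01  /* "this describes a TPR-write patch" */
    #define TPR_PATCH_MAX_LEN  15    /* longest legal x86 instruction */

    struct tpr_patch_desc {
        uint8_t magic;                    /* TPR_PATCH_MAGIC */
        uint8_t len;                      /* length of the MMIO write insn */
        uint8_t code[TPR_PATCH_MAX_LEN];  /* replacement bytes, len used */
    };

    /*
     * qemu side: called after the emulator has handled an MMIO write to the
     * TPR. If the faulting instruction's length matches the published
     * descriptor, overwrite the guest instruction with the supplied bytes.
     */
    static void maybe_patch_tpr_write(uint8_t *guest_insn, unsigned int insn_len,
                                      const struct tpr_patch_desc *desc)
    {
        if (desc->magic != TPR_PATCH_MAGIC || desc->len != insn_len)
            return;                       /* lengths must match exactly */
        memcpy(guest_insn, desc->code, desc->len);
    }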