Re: [Xen-devel] Re: [GIT PULL] xen /proc/mtrr implementation
On 05/19/09 15:31, Ingo Molnar wrote:
> * Gerd Hoffmann <kraxel@xxxxxxxxxx> wrote:
>> On 05/19/09 14:26, Ingo Molnar wrote:
>>> * Gerd Hoffmann <kraxel@xxxxxxxxxx> wrote:
>>>> On 05/19/09 13:08, Ingo Molnar wrote:
>>>>> Or, alternatively, the hypervisor can expose its own
>>>>> administrative interface to manage MTRRs.
>>>> Guess what? Xen does exactly that. And the xen mtrr_ops
>>>> implementation uses that interface ...
>>> No, that is not an 'administrative interface' - that is a guest
>>> kernel level hack that complicates Linux, extends its effective
>>> ABI dependencies and which has to be maintained there from that
>>> point on. There's really just three proper technical solutions
>>> here:
>>>
>>> - either catch the lowlevel CPU hw ops (the MSR modifications,
>>>   which isn't really all that different from the mtrr_ops
>>>   approach so it shouldn't pose undue difficulties to the Xen
>>>   hypervisor).
>> Devil is in the details. The dom0 kernel might not see all
>> physical cpus on the system. So Xen can't leave the job of
>> looping over all cpus to the dom0 kernel; Xen has to apply the
>> changes made by the (privileged) guest kernel on any (virtual)
>> cpu to all (physical) cpus in the machine.
> Applying MTRR changes to only part of the CPUs is utter madness.

Sure. Do you read what I'm writing?

Which in turn means the "lowlevel cpu hw op" would work in a
slightly different way on Xen and native. Nasty.

>>> That will be maximally transparent and compatible, with zero
>>> changes needed to the Linux kernel.
>> No, the linux kernel probably should do the wrmsr on one cpu
>> only then.
> Why?

See above. Xen has to apply the changes to all cpus anyway.

>> Oops, the third "proper technical solutions" is missing.
> Yeah, the third one is to not touch MTRRs after bootup and use
> PAT.

That works only if the CPU has PAT support.

cheers,
  Gerd
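For illustration, a minimal sketch (in kernel-style C) of the two
update models being argued over. The native half mirrors only the
spirit of the arch/x86 MTRR code -- the real rendezvous also disables
caching and interrupts around the writes; the Xen half assumes the
XENPF_add_memtype platform op and a HYPERVISOR_platform_op wrapper,
which may not match the actual patch under discussion:

#include <linux/smp.h>
#include <asm/msr.h>
#include <asm/mtrr.h>
#include <asm/xen/hypercall.h>
#include <xen/interface/platform.h>

struct mtrr_update {
	unsigned int reg;	/* variable-range register index */
	u32 base_lo, base_hi;	/* IA32_MTRR_PHYSBASEn contents  */
	u32 mask_lo, mask_hi;	/* IA32_MTRR_PHYSMASKn contents  */
};

/* Native model: the kernel itself runs the MSR writes on every CPU
 * it knows about. */
static void mtrr_update_one_cpu(void *info)
{
	struct mtrr_update *u = info;

	wrmsr(0x200 + 2 * u->reg, u->base_lo, u->base_hi); /* PHYSBASEn */
	wrmsr(0x201 + 2 * u->reg, u->mask_lo, u->mask_hi); /* PHYSMASKn */
}

static void mtrr_set_native(struct mtrr_update *u)
{
	on_each_cpu(mtrr_update_one_cpu, u, 1);
}

/* Xen model: one hypercall; the *hypervisor* loops over all physical
 * CPUs, including those dom0 never sees. */
static int mtrr_set_xen(unsigned long mfn, unsigned long nr_mfns)
{
	struct xen_platform_op op = {
		.cmd = XENPF_add_memtype,
		.interface_version = XENPF_INTERFACE_VERSION,
		.u.add_memtype = {
			.mfn	 = mfn,
			.nr_mfns = nr_mfns,
			.type	 = MTRR_TYPE_WRCOMB,
		},
	};

	return HYPERVISOR_platform_op(&op);
}

The shape is the whole point: the native path iterates in the kernel
and so only reaches the CPUs the kernel can see, while the hypercall
moves the iteration into Xen.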
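To make the objection to the first solution concrete, here is a
sketch of what "catch the lowlevel CPU hw op" looks like from the
kernel side, using the paravirt MSR hook (pv_cpu_ops did have a
write_msr slot in kernels of this era; the forwarding helper is
hypothetical). Note the catch lives in the generic code, not here:
once Xen fans a single write out to all physical CPUs, the kernel's
MTRR code must stop looping over CPUs itself -- exactly the
"slightly different way on Xen and native" above.

#include <linux/types.h>
#include <asm/msr.h>
#include <asm/paravirt.h>

/* 0x200-0x2ff covers the variable-range, fixed-range and
 * MTRRdefType MSRs. */
static bool is_mtrr_msr(unsigned int msr)
{
	return msr >= 0x200 && msr <= 0x2ff;
}

/* Hypothetical helper: hand the write to the hypervisor, which
 * applies it on every physical CPU. */
int xen_forward_mtrr_write(unsigned int msr, unsigned low, unsigned high);

static int xen_write_msr(unsigned int msr, unsigned low, unsigned high)
{
	if (is_mtrr_msr(msr))
		return xen_forward_mtrr_write(msr, low, high);

	return native_write_msr_safe(msr, low, high);
}

/* wired up along the lines of:
 *	pv_cpu_ops.write_msr = xen_write_msr; */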
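And the third solution in sketch form, using the real ioremap_wc()
interface (the framebuffer parameters are made up): leave the MTRRs
alone after boot and get write-combining mappings from PAT instead.
This also shows the caveat raised at the end of the mail: without
PAT support, ioremap_wc() quietly degrades to an uncached mapping.

#include <linux/errno.h>
#include <linux/io.h>

static void __iomem *fb;

static int map_framebuffer(resource_size_t fb_phys, unsigned long fb_len)
{
	/* With PAT this yields a write-combining mapping without
	 * touching the MTRRs; without PAT it falls back to UC. */
	fb = ioremap_wc(fb_phys, fb_len);
	if (!fb)
		return -ENOMEM;
	return 0;
}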