Re: [Xen-devel] [PATCH V4 1/3] x86: Make page cache mode a real type
* Juergen Gross <jgross@xxxxxxxx> wrote:

> On 10/29/2014 11:28 AM, Ingo Molnar wrote:
> >
> >* Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
> >
> >>On Mon, 27 Oct 2014, Juergen Gross wrote:
> >>>At the moment there are a lot of places that handle setting or getting
> >>>the page cache mode by treating the pgprot bits equal to the cache mode.
> >>>This is only true because there are a lot of assumptions about the setup
> >>>of the PAT MSR. Otherwise the cache type needs to get translated into
> >>>pgprot bits and vice versa.
> >>>
> >>>This patch tries to prepare for that by introducing a separate type
> >>>for the cache mode and adding functions to translate between those and
> >>>pgprot values.
> >>>
> >>>To avoid too much performance penalty the translation between cache mode
> >>>and pgprot values is done via tables which contain the relevant
> >>>information. Write-back cache mode is hard-wired to be 0, all other
> >>>modes are configurable via those tables. For large pages there are
> >>>translation functions as the PAT bit is located at different positions
> >>>in the ptes of 4k and large pages.
> >>
> >>I'm fine with the approach itself. Though I wish you had split this
> >>patch in several sanely to review pieces.
> >>
> >> - Introduce enums, helper functions etc. which basically reflect
> >>   the state of today
> >>
> >> - Change the usage sites to use the enums and helpers
> >>
> >> - Convert the enum/helper implementation to the new scheme
> >>
> >> - Split out the printk change
> >>
> >> ...
> >
> >That absolutely has to be done, should any of this introduce
> >regressions. The diffstat:
> >
> >  18 files changed, 344 insertions(+), 212 deletions(-)
> >
> >is _way_ too large.
>
> Okay, will do.

Thanks!

	Ingo
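The scheme described in the quoted patch text above -- a dedicated cache-mode type plus small lookup tables translating between cache modes and the PWT/PCD/PAT pgprot bits, with a separate helper for large pages because the PAT bit sits at a different position there -- could look roughly like the sketch below. All identifiers, table values and helpers are illustrative assumptions made for this note, not code taken from the patch itself:

/*
 * Illustrative sketch only: a separate type for the page cache mode plus
 * lookup tables for translating to/from the cache-control pte bits.
 */

/* Cache-control bits in a 4k pte. */
#define _PAGE_PWT		(1UL << 3)
#define _PAGE_PCD		(1UL << 4)
#define _PAGE_PAT		(1UL << 7)
/* In a large-page pte bit 7 is PSE, so the PAT bit moves to bit 12. */
#define _PAGE_PAT_LARGE		(1UL << 12)

enum page_cache_mode {
	_PAGE_CACHE_MODE_WB       = 0,	/* write-back, hard-wired to 0 */
	_PAGE_CACHE_MODE_WC       = 1,
	_PAGE_CACHE_MODE_UC_MINUS = 2,
	_PAGE_CACHE_MODE_UC       = 3,
	_PAGE_CACHE_MODE_WT       = 4,
	_PAGE_CACHE_MODE_WP       = 5,
	_PAGE_CACHE_MODE_NUM      = 8
};

/*
 * cache mode -> pte cache bits.  Conservative defaults for the power-on PAT
 * layout (WB, WT, UC-, UC); modes without a slot fall back to a stricter one.
 * A non-standard PAT setup would rewrite these entries at init time.
 */
static unsigned long cachemode2pte_tbl[_PAGE_CACHE_MODE_NUM] = {
	[_PAGE_CACHE_MODE_WB]       = 0,
	[_PAGE_CACHE_MODE_WC]       = _PAGE_PCD,		/* -> UC- */
	[_PAGE_CACHE_MODE_UC_MINUS] = _PAGE_PCD,
	[_PAGE_CACHE_MODE_UC]       = _PAGE_PCD | _PAGE_PWT,
	[_PAGE_CACHE_MODE_WT]       = _PAGE_PWT,
	[_PAGE_CACHE_MODE_WP]       = _PAGE_PCD | _PAGE_PWT,	/* -> UC  */
};

/* PAT/PCD/PWT index (0..7) -> cache mode, again for the power-on layout. */
static enum page_cache_mode pte2cachemode_tbl[8] = {
	[0] = _PAGE_CACHE_MODE_WB,
	[1] = _PAGE_CACHE_MODE_WT,
	[2] = _PAGE_CACHE_MODE_UC_MINUS,
	[3] = _PAGE_CACHE_MODE_UC,
	[4] = _PAGE_CACHE_MODE_WB,
	[5] = _PAGE_CACHE_MODE_WT,
	[6] = _PAGE_CACHE_MODE_UC_MINUS,
	[7] = _PAGE_CACHE_MODE_UC,
};

static unsigned long cachemode2protval(enum page_cache_mode pcm)
{
	if (pcm == _PAGE_CACHE_MODE_WB)	/* WB == 0 avoids the lookup */
		return 0;
	return cachemode2pte_tbl[pcm];
}

static enum page_cache_mode pgprot2cachemode(unsigned long protval)
{
	unsigned int idx = ((protval & _PAGE_PAT) ? 4 : 0) |
			   ((protval & _PAGE_PCD) ? 2 : 0) |
			   ((protval & _PAGE_PWT) ? 1 : 0);
	return pte2cachemode_tbl[idx];
}

/* Large pages need their own helper: same bits, but PAT lives at bit 12. */
static unsigned long cachemode2protval_large(enum page_cache_mode pcm)
{
	unsigned long prot = cachemode2protval(pcm);

	if (prot & _PAGE_PAT)
		prot = (prot & ~_PAGE_PAT) | _PAGE_PAT_LARGE;
	return prot;
}

The point of the table-driven form, as the patch description notes, is that the hot translation paths stay a cheap array lookup (with write-back short-circuited to 0) while still allowing a non-default PAT MSR layout to be supported simply by rewriting the table entries at initialization.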