Re: [Xen-devel] [PATCH 1/2] x86: remove X86_INTEL_USERCOPY code
On 30/08/13 03:14, Matt Wilson wrote:
> Nothing defines CONFIG_X86_INTEL_USERCOPY, and as far as I can tell it
> was never used even when Xen supported 32-bit x86.
>
> Signed-off-by: Matt Wilson <msw@xxxxxxxxxx>
Furthermore, turning it on would appear to result in a compile error, as the
struct movsl_mask type is not defined anywhere I can find in the current tree.
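For anyone wondering what is missing: Linux's 32-bit tree (where this file
originated) defines the type in its uaccess header, roughly as below. Nothing
equivalent was ever imported into Xen, so the declaration being removed here
could never have compiled. (Quoted from memory of include/asm-i386/uaccess.h,
not from any Xen header.)

    #ifdef CONFIG_X86_INTEL_USERCOPY
    /* Alignment mask consulted by the optimised user-copy routines. */
    extern struct movsl_mask {
            int mask;
    } ____cacheline_aligned_in_smp movsl_mask;
    #endif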
Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> ---
> xen/arch/x86/cpu/intel.c | 21 ---------------------
> 1 files changed, 0 insertions(+), 21 deletions(-)
>
> diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
> index 9b71d36..072ecbc 100644
> --- a/xen/arch/x86/cpu/intel.c
> +++ b/xen/arch/x86/cpu/intel.c
> @@ -18,13 +18,6 @@
>  
>  #define select_idle_routine(x) ((void)0)
>  
> -#ifdef CONFIG_X86_INTEL_USERCOPY
> -/*
> - * Alignment at which movsl is preferred for bulk memory copies.
> - */
> -struct movsl_mask movsl_mask __read_mostly;
> -#endif
> -
>  static unsigned int probe_intel_cpuid_faulting(void)
>  {
>  	uint64_t x;
> @@ -229,20 +222,6 @@ static void __devinit init_intel(struct cpuinfo_x86 *c)
>  	/* Work around errata */
>  	Intel_errata_workarounds(c);
>  
> -#ifdef CONFIG_X86_INTEL_USERCOPY
> -	/*
> -	 * Set up the preferred alignment for movsl bulk memory moves
> -	 */
> -	switch (c->x86) {
> -	case 6: /* PII/PIII only like movsl with 8-byte alignment */
> -		movsl_mask.mask = 7;
> -		break;
> -	case 15: /* P4 is OK down to 8-byte alignment */
> -		movsl_mask.mask = 7;
> -		break;
> -	}
> -#endif
> -
>  	if ((c->x86 == 0xf && c->x86_model >= 0x03) ||
>  	    (c->x86 == 0x6 && c->x86_model >= 0x0e))
>  		set_bit(X86_FEATURE_CONSTANT_TSC, c->x86_capability);
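For context on what the removed switch would have fed into: in Linux the mask
is consumed by a helper along the lines of the sketch below, which steers
large user copies away from the rep movsl path when either pointer is
misaligned. This is paraphrased from Linux's arch/i386/lib/usercopy.c and is
only illustrative; no such consumer exists anywhere in the Xen tree, which is
the point of the patch.

    /* Sketch paraphrased from the Linux i386 usercopy code, not Xen code. */
    static inline int __movsl_is_ok(unsigned long a1, unsigned long a2,
                                    unsigned long n)
    {
    #ifdef CONFIG_X86_INTEL_USERCOPY
            /* Bulk copy with a misaligned pointer: avoid rep movsl. */
            if (n >= 64 && ((a1 | a2) & movsl_mask.mask))
                    return 0;
    #endif
            return 1;
    }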
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel