
Re: [Xen-devel] [PATCH v2 4/6] xen/x86/pvh: Set up descriptors for 32-bit PVH guests



On 07/17/2015 12:43 PM, Konrad Rzeszutek Wilk wrote:
On Fri, Jul 17, 2015 at 11:36:29AM -0400, Boris Ostrovsky wrote:
On 07/17/2015 11:21 AM, Konrad Rzeszutek Wilk wrote:
On Thu, Jul 16, 2015 at 05:43:39PM -0400, Boris Ostrovsky wrote:
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
---
Changes in v2:
* Set segment selectors using loadsegment() instead of assembly

  arch/x86/xen/enlighten.c | 15 ++++++++++-----
  1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index f8dc398..d665b1d 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1362,12 +1362,12 @@ static void __init xen_boot_params_init_edd(void)
  static void __ref xen_setup_gdt(int cpu)
  {
        if (xen_feature(XENFEAT_auto_translated_physmap)) {
-#ifdef CONFIG_X86_64
-               unsigned long dummy;
+               unsigned long __attribute__((unused)) dummy;
-               load_percpu_segment(cpu); /* We need to access per-cpu area */
You removed that - where are we going to do it now? switch_to_new_gdt()
uses the per-cpu GDT table.
load_percpu_segment() is part of switch_to_new_gdt(), so I thought there was
no need to call it here.

But you are right --- switch_to_new_gdt() starts with get_cpu_gdt_table(),
which accesses the per-CPU area. How did this manage to work, then?
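
(For reference, switch_to_new_gdt() in arch/x86/kernel/cpu/common.c looked
roughly like this at the time; a sketch from memory, not the verbatim source:)

void switch_to_new_gdt(int cpu)
{
        struct desc_ptr gdt_descr;

        /* get_cpu_gdt_table() is a per-cpu access, so the per-cpu
         * segment register must already be usable at this point. */
        gdt_descr.address = (long)get_cpu_gdt_table(cpu);
        gdt_descr.size = GDT_SIZE - 1;
        load_gdt(&gdt_descr);

        /* Reload the per-cpu base register. */
        load_percpu_segment(cpu);
}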
I was surprised as well - I was expecting your patch to blow up.
Unless we are doing something fancy for CPU0, and for the other CPUs the
per-cpu segment is already set up during bootup (copied from the BSP)?


No, %fs is zero when we enter xen_setup_gdt() (for 32-bit).
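
(On 32-bit the per-cpu area is addressed relative to %fs, and
load_percpu_segment() is what gives %fs a real selector; on 64-bit it is %gs
plus MSR_GS_BASE. Again as a rough sketch of the common.c code of that era:)

void load_percpu_segment(int cpu)
{
#ifdef CONFIG_X86_32
        /* 32-bit: per-cpu accesses go through %fs. */
        loadsegment(fs, __KERNEL_PERCPU);
#else
        /* 64-bit: per-cpu accesses go through %gs, based at MSR_GS_BASE. */
        loadsegment(gs, 0);
        wrmsrl(MSR_GS_BASE, (unsigned long)per_cpu(irq_stack_union.gs_base, cpu));
#endif
        load_stack_canary_segment();
}

So with %fs == 0, the get_cpu_gdt_table() access in switch_to_new_gdt() would
be expected to fault.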

In any case, I should put load_percpu_segment() back.
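
(Roughly, relative to v2 - a sketch of what that could look like, not the
actual v3 patch:)

 	if (xen_feature(XENFEAT_auto_translated_physmap)) {
 		unsigned long __attribute__((unused)) dummy;
 
+		load_percpu_segment(cpu); /* We need to access per-cpu area */
 		switch_to_new_gdt(cpu); /* GDT and GS set */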

-boris

