
Re: [Xen-devel] [PATCH RESEND v5 1/6] xen/arm: Implement hvm save and restore



12.11.2013 19:15, Ian Campbell  wrote:
On Fri, 2013-11-08 at 16:50 +0900, Jaeyong Yoo wrote:

If these get resent again please can you CC the ARM maintainers.
$ git grep -A5 ARM MAINTAINERS
MAINTAINERS:ARM (W/ VIRTUALISATION EXTENSIONS) ARCHITECTURE
MAINTAINERS-M:  Ian Campbell <ian.campbell@xxxxxxxxxx>
MAINTAINERS-M:  Stefano Stabellini <stefano.stabellini@xxxxxxxxxx>
MAINTAINERS-M:  Tim Deegan <tim@xxxxxxx>
MAINTAINERS-S:  Supported
MAINTAINERS-L:  xen-devel@xxxxxxxxxxxxx

From: Evgeny Fedotov <e.fedotov@xxxxxxxxxxx>

Implement save/restore of hvm context hypercall.
In hvm context save/restore, we save gic, timer and vfp registers.

Changes from v4: Save vcpu registers within hvm context, and purge
the save-vcpu-register patch.
The inter-version changelog should go after the main commit message and
the S-o-b separated by a "---" on a line by itself. This way it is not
included in the final commit.
OK.
[...]
+static int vgic_irq_rank_save(struct vgic_rank *ext,
+                               struct vgic_irq_rank *rank)
+{
+    spin_lock(&rank->lock);
This is probably wise, but the domain must be paused at this point,
right?
I think this lock can be removed, thanks.

+    /* Some of VGIC registers are not used yet, it is for a future usage */
+    /* IENABLE, IACTIVE, IPEND,  PENDSGI registers */
+    ext->ienable = rank->ienable;
+    ext->iactive = rank->iactive;
+    ext->ipend = rank->ipend;
+    ext->pendsgi = rank->pendsgi;
+    /* ICFG */
+    ext->icfg[0] = rank->icfg[0];
+    ext->icfg[1] = rank->icfg[1];
+    /* IPRIORITY */
+    if ( sizeof(rank->ipriority) != sizeof (ext->ipriority) )
+    {
+        dprintk(XENLOG_G_ERR, "hvm_hw_gic: check ipriority dumping space\n");
What makes ipriority (and itargets below) special enough to warrant such
a check compared with all the other fields?

Could these checks be BUILD_BUG_ON?
If we use memcpy I think we should always check that the sizes of the source and destination are equal.
Of course, BUILD_BUG_ON is safe, so I can use it. Thanks.
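Something like this, for example (a minimal sketch only, keeping the field names from this hunk; the same change would apply to the itargets and VFP copies further down):

    BUILD_BUG_ON(sizeof(rank->ipriority) != sizeof(ext->ipriority));
    memcpy(ext->ipriority, rank->ipriority, sizeof(rank->ipriority));

That way a layout mismatch between struct vgic_irq_rank and the save record becomes a build failure instead of a runtime -EINVAL.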

+        return -EINVAL;
+    }
+    memcpy(ext->ipriority, rank->ipriority, sizeof(rank->ipriority));
+    /* ITARGETS */
+    if ( sizeof(rank->itargets) != sizeof (ext->itargets) )
+    {
+        dprintk(XENLOG_G_ERR, "hvm_hw_gic: check itargets dumping space\n");
+        return -EINVAL;
+    }
+    memcpy(ext->itargets, rank->itargets, sizeof(rank->itargets));
+    spin_unlock(&rank->lock);
+    return 0;
+}
+
+static int gic_save(struct domain *d, hvm_domain_context_t *h)
+{
+    struct hvm_hw_gic ctxt;
+    struct vcpu *v;
+
+    /* Save the state of GICs */
+    for_each_vcpu( d, v )
Where is the GICD state saved then?
The only GICD structure we save for a guest domain is struct vgic_irq_rank; it includes the IENABLE, IACTIVE, IPEND, PENDSGI, ICFG, IPRIORITY and ITARGETS registers. We create a matching structure inside the hvm save record, vgic_rank (which is not guaranteed to be laid out the same as struct vgic_irq_rank), and save it by calling the vgic_irq_rank_save routine from gic_save below; a sketch of that layout follows the gic_save hunk.

+    {
+        ctxt.gic_hcr = v->arch.gic_hcr;
+        ctxt.gic_vmcr = v->arch.gic_vmcr;
+        ctxt.gic_apr = v->arch.gic_apr;
+
+        /* Save list registers and masks */
+        /* (it is not necessary to save/restore them, but LR state can have
+         * influence on downtime after Live Migration (to be tested)
What does this mean?

I'm in two minds about whether LR counts as architectural state, since
it is hypervisor side and not guest facing. I'd have thought that the LR
content could be reconstructed from the pending and active bitmasks.
Well, I tried to save the LRs so that all interrupt queues (pending and active) could be restored later, to minimize the number of IRQs lost during migration. But at this stage it is quite wrong to save the LRs, so I will remove this.

+         */
+        if ( sizeof(v->arch.gic_lr) > sizeof (ctxt.gic_lr) )
+        {
+             dprintk(XENLOG_G_ERR, "hvm_hw_gic: increase LR dumping space\n");
+             return -EINVAL;
+        }
+        memcpy(ctxt.gic_lr, v->arch.gic_lr, sizeof(v->arch.gic_lr));
+        ctxt.lr_mask = v->arch.lr_mask;
+        ctxt.event_mask = v->arch.event_mask;
These last two in particular make me think that saving the LRs is wrong.
OK

+        /* Save PPI states (per-CPU) */
+        /* It is necessary if SMP enabled */
+        if ( vgic_irq_rank_save(&ctxt.ppi_state,  &v->arch.vgic.private_irqs) )
+            return 1;
+
+        if ( hvm_save_entry(GIC, v->vcpu_id, h, &ctxt) != 0 )
+            return 1;
+    }
+    return 0;
+}
+
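As mentioned above, the per-rank record struct vgic_rank is intended to mirror the fields copied in vgic_irq_rank_save(); roughly like this (a sketch only, the exact field widths in the final public header may differ):

    struct vgic_rank
    {
        uint32_t ienable;
        uint32_t iactive;
        uint32_t ipend;
        uint32_t pendsgi;
        uint32_t icfg[2];      /* 2 config bits per IRQ in the rank */
        uint32_t ipriority[8]; /* 8 bits of priority per IRQ in the rank */
        uint32_t itargets[8];  /* 8-bit target mask per IRQ in the rank */
    };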
+static int gic_load(struct domain *d, hvm_domain_context_t *h)
+{
+    int vcpuid;
+    struct hvm_hw_gic ctxt;
+    struct vcpu *v;
[...]
+    return 0;
+}
+
+HVM_REGISTER_SAVE_RESTORE(GIC, gic_save, gic_load, 1, HVMSR_PER_VCPU);
+
[...]
+#ifdef CONFIG_ARM_32
+        ctxt.mair0 = v->arch.mair0;
+        ctxt.mair1 = v->arch.mair1;
+#else
+        ctxt.mair0 = v->arch.mair;
+#endif
+        /* Control Registers */
+        ctxt.actlr = v->arch.actlr;
+        ctxt.sctlr = v->arch.sctlr;
+        ctxt.cpacr = v->arch.cpacr;
[...]
+        ctxt.x0 = c.x0;
Would it make sense to include a field of type vcpu_guest_core_regs in
the save struct instead of this big list?
Yes, I think vcpu_guest_core_regs is already a common definition for ARM32 and ARM64.
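Concretely, the save record could just embed it (a sketch, assuming c is, or can be filled in as, a struct vcpu_guest_core_regs):

    struct hvm_hw_cpu
    {
        /* ... system registers (MAIR, ACTLR, SCTLR, CPACR, ...) as in this patch ... */
        struct vcpu_guest_core_regs core_regs;
    };

and the long list of per-register assignments, including the banked SPSRs and the EL1 SP/ELR handled below, collapses to a single structure copy:

    ctxt.core_regs = c;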

[...]

+        #ifdef CONFIG_ARM_32
#ifdef in the first columns please.
OK

+                ctxt.spsr_fiq = c.spsr_fiq;
+                ctxt.spsr_irq = c.spsr_irq;
+                ctxt.spsr_und = c.spsr_und;
+                ctxt.spsr_abt = c.spsr_abt;
and don't indent these as if they were inside a block
Sure

You had this right when you saved the MAIR registers above.

+        #endif
+        #ifdef CONFIG_ARM_64
or #else if you prefer.
OK

+                ctxt.sp_el0 = c.sp_el0;
+                ctxt.sp_el1 = c.sp_el1;
+                ctxt.elr_el1 = c.elr_el1;
+        #endif
+
+        /* check VFP state size before dumping */
+        if ( sizeof(v->arch.vfp) > sizeof (ctxt.vfp) )
+        {
+            dprintk(XENLOG_G_ERR, "hvm_hw_cpu: increase VFP dumping space\n");
+            return -EINVAL;
+        }
+        memcpy((void*) &ctxt.vfp, (void*) &v->arch.vfp, sizeof(v->arch.vfp));
+
+        ctxt.pause_flags = v->pause_flags;
x86 has at the top of the loop:
         /* We don't need to save state for a vcpu that is down; the restore
          * code will leave it down if there is nothing saved. */
         if ( test_bit(_VPF_down, &v->pause_flags) )
             continue;

which I think is preferable to this.
OK
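i.e. at the top of the ARM for_each_vcpu loop as well (sketch):

    for_each_vcpu ( d, v )
    {
        /* Skip vcpus that are down; the restore side leaves them down
         * if nothing was saved for them. */
        if ( test_bit(_VPF_down, &v->pause_flags) )
            continue;
        /* ... save the rest of the state as above ... */
    }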

diff --git a/xen/include/asm-arm/hvm/support.h b/xen/include/asm-arm/hvm/support.h
new file mode 100644
index 0000000..8311f2f
--- /dev/null
+++ b/xen/include/asm-arm/hvm/support.h
@@ -0,0 +1,29 @@
+/*
+ * support.h: HVM support routines used by ARMv7 VE.
Not just on Vexpress nor just on v7 I expect?
VE means Virtualization Extensions here.
OK, we should support ARMv8 in the future, so would "HVM support routines used by ARMv7/v8 with Virtualization Extensions" be a good comment?
+ *
+ * Copyright (c) 2012, Citrix Systems
Really?
OK
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#ifndef __ASM_ARM_HVM_SUPPORT_H__
+#define __ASM_ARM_HVM_SUPPORT_H__
+
+#include <xen/types.h>
+#include <public/hvm/ioreq.h>
+#include <xen/sched.h>
+#include <xen/hvm/save.h>
+#include <asm/processor.h>
Just some #includes here?
Yes, I will remove the unnecessary includes.

Ian,


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


