
Re: [Xen-devel] [PATCH v3 6/6] [RFC] [INCOMPLETE] libxc/restore: Remus



I will try to implement the restore side as you proposed. The save side
was already a single do{}while loop; I use ctx.save.live to
differentiate the two phases.
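
Roughly, the shape I have in mind is below (a sketch only; the helper
names send_memory_live(), send_memory_checkpointed() and
write_checkpoint_record() are illustrative, not the final API):

    /* Single save loop: the first iteration is the live migration
     * phase, every later iteration sends one Remus checkpoint. */
    do
    {
        if ( ctx.save.live )
            rc = send_memory_live(&ctx);          /* iterative pre-copy */
        else
            rc = send_memory_checkpointed(&ctx);  /* dirty pages only */

        if ( rc )
            break;

        if ( ctx.save.checkpointed )
        {
            /* All iterations after the first are checkpoints. */
            ctx.save.live = false;

            rc = write_checkpoint_record(&ctx);
            /* ... wait for the caller's next checkpoint trigger ... */
        }
    } while ( !rc && ctx.save.checkpointed );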

On 05/12/2015 05:35 PM, Andrew Cooper wrote:
Some basic infrastructure.

CC: Yang Hongyang <yanghy@xxxxxxxxxxxxxx>

---

I started attempting to explain this in text, but concluded that pseudocode
would be easier.

My design for the restore side handling was always based around a single "do {
} while ( not end )" loop, and specifically not to split into two phases.

If I have interpreted your restore patch correctly, then this should be a
plausible way to avoid splitting into two phases.  Please feel free to
modify it to suit.
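
Roughly (a sketch of the loop shape only, not the final code):

    /* Single restore loop: keep reading and processing records until
     * the END record arrives; CHECKPOINT records are handled inline
     * by process_record(). */
    do
    {
        rc = read_record(ctx, &rec);
        if ( !rc )
            rc = process_record(ctx, &rec);
    } while ( !rc && rec.type != REC_TYPE_END );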

If at all possible, I would like to similarly avoid splitting the save
side into multiple phases, but that is proving far harder, and it is
probably worth waiting for another iteration with the current
improvements integrated.
---
  tools/libxc/xc_sr_common.h  |    6 ++++++
  tools/libxc/xc_sr_restore.c |   34 ++++++++++++++++++++++++++++++++++
  2 files changed, 40 insertions(+)

diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index c0f90d4..116c89b 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -219,6 +219,12 @@ struct xc_sr_context

              /* Sender has invoked verify mode on the stream. */
              bool verify;
+
+            /* Expecting a checkpointed stream? */
+            bool checkpointed;
+
+            /* Currently buffering records between checkpoints? */
+            bool buffer_all_records;
          } restore;
      };

diff --git a/tools/libxc/xc_sr_restore.c b/tools/libxc/xc_sr_restore.c
index 53bd674..7a124bd 100644
--- a/tools/libxc/xc_sr_restore.c
+++ b/tools/libxc/xc_sr_restore.c
@@ -468,11 +468,40 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
      return rc;
  }

+static int handle_checkpoint(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    int rc = 0;
+
+    if ( !ctx->restore.checkpointed )
+    {
+        ERROR("Found checkpoint in non-checkpointed stream");
+        return -1;
+    }
+
+    if ( ctx->restore.buffer_all_records )
+    {
+        /* TODO: drain buffer - replay records received since last checkpoint */
+    }
+    else
+        ctx->restore.buffer_all_records = true;
+
+    return rc;
+}
+
  static int process_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
  {
      xc_interface *xch = ctx->xch;
      int rc = 0;

+    if ( ctx->restore.buffer_all_records &&
+         rec->type != REC_TYPE_END &&
+         rec->type != REC_TYPE_CHECKPOINT )
+    {
+        /* TODO: copy record into buffer for replay at the next checkpoint */
+        return 0;
+    }
+
      switch ( rec->type )
      {
      case REC_TYPE_END:
@@ -487,6 +516,10 @@ static int process_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
          ctx->restore.verify = true;
          break;

+    case REC_TYPE_CHECKPOINT:
+        rc = handle_checkpoint(ctx);
+        break;
+
      default:
          rc = ctx->restore.ops.process_record(ctx, rec);
          break;
@@ -592,6 +625,7 @@ int xc_domain_restore2(xc_interface *xch, int io_fd, uint32_t dom,
      ctx.restore.console_domid = console_domid;
      ctx.restore.xenstore_evtchn = store_evtchn;
      ctx.restore.xenstore_domid = store_domid;
+    ctx.restore.checkpointed = checkpointed_stream;
      ctx.restore.callbacks = callbacks;

      IPRINTF("In experimental %s", __func__);
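
To illustrate the two TODOs, one possible shape for the buffering (a
sketch only; the buffered_records array and its bookkeeping fields are
hypothetical additions to ctx->restore, not part of this patch):

    /* Hypothetical state in ctx->restore:
     *   struct xc_sr_record *buffered_records;
     *   unsigned buffered_rec_num, allocated_rec_num;
     */

    /* "copy into buffer": stash a record for later replay.  The struct
     * copy takes ownership of rec->data, so the caller must not free it. */
    static int buffer_record(struct xc_sr_context *ctx,
                             struct xc_sr_record *rec)
    {
        xc_interface *xch = ctx->xch;

        if ( ctx->restore.buffered_rec_num == ctx->restore.allocated_rec_num )
        {
            unsigned num = ctx->restore.allocated_rec_num + 64;
            struct xc_sr_record *p =
                realloc(ctx->restore.buffered_records, num * sizeof(*p));

            if ( !p )
            {
                ERROR("Unable to allocate record buffer");
                return -1;
            }
            ctx->restore.buffered_records = p;
            ctx->restore.allocated_rec_num = num;
        }

        ctx->restore.buffered_records[ctx->restore.buffered_rec_num++] = *rec;
        return 0;
    }

    /* "drain buffer", in handle_checkpoint(): replay everything buffered
     * since the previous checkpoint, then reset the count. */
    for ( i = 0; i < ctx->restore.buffered_rec_num; ++i )
    {
        rc = process_record(ctx, &ctx->restore.buffered_records[i]);
        if ( rc )
            return rc;
    }
    ctx->restore.buffered_rec_num = 0;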


--
Thanks,
Yang.
