Re: [Xen-devel] [PATCH] Fix save/restore for HVM domains with viridian=1
Keir,

Yes, just ditch the kernel change.

  Paul

> -----Original Message-----
> From: Keir Fraser [mailto:keir.xen@xxxxxxxxx]
> Sent: 25 November 2011 15:40
> To: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH] Fix save/restore for HVM domains
> with viridian=1
>
> On 25/11/2011 15:30, "Paul Durrant" <paul.durrant@xxxxxxxxxx> wrote:
>
> > # HG changeset patch
> > # User Paul Durrant <paul.durrant@xxxxxxxxxx>
> > # Date 1322235040 0
> > # Node ID 2a8ef49f60cfe5a6c4c9f434af472ab58a125a7a
> > # Parent  0a0c02a616768bfab16c072788cb76be1893c37f
> > Fix save/restore for HVM domains with viridian=1
> >
> > xc_domain_save/restore currently pay no attention to
> > HVM_PARAM_VIRIDIAN, which results in an HVM domain running a recent
> > version of Windows (post-Vista) locking up on a domain restore due to
> > EOIs (done via a viridian MSR write) being silently dropped.
> > This patch adds an extra save entry for the viridian parameter.
>
> Ok, I did check in the previous one with the intention of fixing up the
> Xen bits. Is the best fix now agreed to be just removing the Xen bits? :-)
> I can check in a patch to do that if so.
>
>  -- Keir
>
> > Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> >
> > diff -r 0a0c02a61676 -r 2a8ef49f60cf tools/libxc/xc_domain_restore.c
> > --- a/tools/libxc/xc_domain_restore.c   Mon Nov 21 21:28:34 2011 +0000
> > +++ b/tools/libxc/xc_domain_restore.c   Fri Nov 25 15:30:40 2011 +0000
> > @@ -675,6 +675,7 @@ typedef struct {
> >      uint64_t vm86_tss;
> >      uint64_t console_pfn;
> >      uint64_t acpi_ioport_location;
> > +    uint64_t viridian;
> >  } pagebuf_t;
> >
> >  static int pagebuf_init(pagebuf_t* buf)
> > @@ -809,6 +810,16 @@ static int pagebuf_get_one(xc_interface
> >          }
> >          return pagebuf_get_one(xch, ctx, buf, fd, dom);
> >
> > +    case XC_SAVE_ID_HVM_VIRIDIAN:
> > +        /* Skip padding 4 bytes then read the viridian flag. */
> > +        if ( RDEXACT(fd, &buf->viridian, sizeof(uint32_t)) ||
> > +             RDEXACT(fd, &buf->viridian, sizeof(uint64_t)) )
> > +        {
> > +            PERROR("error reading the viridian flag");
> > +            return -1;
> > +        }
> > +        return pagebuf_get_one(xch, ctx, buf, fd, dom);
> > +
> >      default:
> >          if ( (count > MAX_BATCH_SIZE) || (count < 0) ) {
> >              ERROR("Max batch size exceeded (%d). Giving up.", count);
> > @@ -1440,6 +1451,9 @@ int xc_domain_restore(xc_interface *xch,
> >          fcntl(io_fd, F_SETFL, orig_io_fd_flags | O_NONBLOCK);
> >      }
> >
> > +    if (pagebuf.viridian != 0)
> > +        xc_set_hvm_param(xch, dom, HVM_PARAM_VIRIDIAN, 1);
> > +
> >      if (pagebuf.acpi_ioport_location == 1) {
> >          DBGPRINTF("Use new firmware ioport from the checkpoint\n");
> >          xc_set_hvm_param(xch, dom, HVM_PARAM_ACPI_IOPORTS_LOCATION, 1);
> >
> > diff -r 0a0c02a61676 -r 2a8ef49f60cf tools/libxc/xc_domain_save.c
> > --- a/tools/libxc/xc_domain_save.c      Mon Nov 21 21:28:34 2011 +0000
> > +++ b/tools/libxc/xc_domain_save.c      Fri Nov 25 15:30:40 2011 +0000
> > @@ -1506,6 +1506,18 @@ int xc_domain_save(xc_interface *xch, in
> >              PERROR("Error when writing the firmware ioport version");
> >              goto out;
> >          }
> > +
> > +        chunk.id = XC_SAVE_ID_HVM_VIRIDIAN;
> > +        chunk.data = 0;
> > +        xc_get_hvm_param(xch, dom, HVM_PARAM_VIRIDIAN,
> > +                         (unsigned long *)&chunk.data);
> > +
> > +        if ( (chunk.data != 0) &&
> > +             wrexact(io_fd, &chunk, sizeof(chunk)) )
> > +        {
> > +            PERROR("Error when writing the viridian flag");
> > +            goto out;
> > +        }
> >      }
> >
> >      if ( !callbacks->checkpoint )
> >
> > diff -r 0a0c02a61676 -r 2a8ef49f60cf tools/libxc/xg_save_restore.h
> > --- a/tools/libxc/xg_save_restore.h     Mon Nov 21 21:28:34 2011 +0000
> > +++ b/tools/libxc/xg_save_restore.h     Fri Nov 25 15:30:40 2011 +0000
> > @@ -134,6 +134,7 @@
> >  #define XC_SAVE_ID_HVM_CONSOLE_PFN   -8 /* (HVM-only) */
> >  #define XC_SAVE_ID_LAST_CHECKPOINT   -9 /* Commit to restoring after completion of current iteration. */
> >  #define XC_SAVE_ID_HVM_ACPI_IOPORTS_LOCATION -10
> > +#define XC_SAVE_ID_HVM_VIRIDIAN      -11
> >
> >  /*
> >  ** We process save/restore/migrate in batches of pages; the below
> >
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/xen-devel
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
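
For anyone decoding the legacy libxc stream by hand: the save side writes the
new record as one whole struct with wrexact(io_fd, &chunk, sizeof(chunk)), so
the double RDEXACT on the restore side (a 4-byte read immediately overwritten
by an 8-byte read into the same buffer) is consuming struct alignment padding,
not a second field. Below is a minimal standalone sketch of that layout; it is
not part of the patch, and the struct name and field types are assumptions
inferred from the sizeof(uint32_t)/sizeof(uint64_t) reads visible above.

/* chunk_layout.c - illustrative sketch only, NOT Xen source.
 * Shows why a 32-bit id followed by a 64-bit payload carries 4 bytes
 * of implicit padding on LP64 targets, matching the restore-side reads.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct chunk {
    int32_t  id;    /* e.g. XC_SAVE_ID_HVM_VIRIDIAN == -11        */
                    /* 4 bytes of implicit padding inserted here   */
    uint64_t data;  /* HVM_PARAM_VIRIDIAN value (0 or 1)           */
};

int main(void)
{
    /* On a typical LP64 build these print 16 and 8. */
    printf("sizeof(struct chunk)  = %zu\n", sizeof(struct chunk));
    printf("offsetof(chunk, data) = %zu\n", offsetof(struct chunk, data));
    return 0;
}

Note that the first RDEXACT reuses &buf->viridian merely as scratch space for
the padding bytes; the second read then stores the real 64-bit flag value.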