Re: [Xen-devel] [PATCH v7 3/5] shutdown: Add source information to SHUTDOWN and RESET
On Mon, 8 May 2017 16:19:51 -0500 Eric Blake <eblake@xxxxxxxxxx> wrote:

> Time to wire up all the call sites that request a shutdown or
> reset to use the enum added in the previous patch.
>
> It would have been less churn to keep the common case with no
> arguments as meaning guest-triggered, and to modify only the
> host-triggered code paths via a wrapper function, but then we'd
> still have to audit that I didn't miss any host-triggered spots;
> changing the signature forces us to double-check that I correctly
> categorized all callers.
>
> Since command-line options can change whether a guest reset request
> causes an actual reset vs. a shutdown, it's easy to also add the
> information to reset requests.
>
> Replay adds a FIXME to preserve the cause across the replay stream;
> that will be tackled in the next patch.
>
> Signed-off-by: Eric Blake <eblake@xxxxxxxxxx>
> Acked-by: David Gibson <david@xxxxxxxxxxxxxxxxxxxxx> [ppc parts]
> Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@xxxxxxxxxxxx> [SPARC part]

> diff --git a/target/s390x/helper.c b/target/s390x/helper.c
> index 68bd2f9..d2bb9aa 100644
> --- a/target/s390x/helper.c
> +++ b/target/s390x/helper.c
> @@ -266,7 +266,7 @@ void load_psw(CPUS390XState *env, uint64_t mask, uint64_t addr)
>          S390CPU *cpu = s390_env_get_cpu(env);
>          if (s390_cpu_halt(cpu) == 0) {
>  #ifndef CONFIG_USER_ONLY
> -            qemu_system_shutdown_request();
> +            qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);

Looking at this, I think the non-KVM code path might want some panic
handling for disabled wait, but that would be an orthogonal change, and
this transformation looks reasonable.

>  #endif
>          }
>      }

s390x parts:

Reviewed-by: Cornelia Huck <cornelia.huck@xxxxxxxxxx>
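For readers new to the series, the shape of the change is roughly as
below. This is a minimal, self-contained C sketch, not the actual QEMU
code: the enum values are assumed to mirror a subset of the ShutdownCause
enum added in the previous patch, and the function body is a stand-in for
what the real qemu_system_shutdown_request() does.

/* Minimal sketch, not the actual QEMU code: the enum values below are
 * assumed to mirror a subset of the ShutdownCause enum added in the
 * previous patch of this series. */
#include <stdio.h>

typedef enum ShutdownCause {
    SHUTDOWN_CAUSE_NONE,           /* no shutdown request pending */
    SHUTDOWN_CAUSE_HOST_SIGNAL,    /* host sent the process a signal */
    SHUTDOWN_CAUSE_GUEST_SHUTDOWN, /* guest asked for it, e.g. the
                                      load_psw() hunk above */
    SHUTDOWN_CAUSE_GUEST_RESET,    /* guest-initiated reset */
} ShutdownCause;

/* Before this patch the function took no arguments, so host- and
 * guest-triggered shutdowns were indistinguishable at the call site.
 * Making the cause a required parameter is what forces the audit of
 * every caller that the commit message describes. */
static void qemu_system_shutdown_request(ShutdownCause reason)
{
    /* The real function latches the cause and kicks the main loop;
     * printing it is enough to show the information is now carried. */
    printf("shutdown requested, cause=%d\n", (int)reason);
}

int main(void)
{
    /* Guest-triggered, as in the s390x hunk quoted above: */
    qemu_system_shutdown_request(SHUTDOWN_CAUSE_GUEST_SHUTDOWN);
    /* Host-triggered, e.g. SIGTERM delivered to the process: */
    qemu_system_shutdown_request(SHUTDOWN_CAUSE_HOST_SIGNAL);
    return 0;
}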