Re: [Xen-devel] [PATCH 15/18] xen/arm: Introduce a helper to synchronize SError
On Mon, 20 Mar 2017, Stefano Stabellini wrote:
> On Mon, 13 Mar 2017, Wei Chen wrote:
> > We may have to isolate the SError between the context switch of
> > 2 vCPUs or may have to prevent slipping hypervisor SError to guest.
>
> I thought the problem is not the risk of slipping hypervisor SErrors to
> guest, but the risk of slipping previous guest SErrors to the next
> guest. Is that right?

I can see from the following patches that both situations are a risk,
although the patch could benefit from a better commit description:

Reviewed-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>

> > So we need a helper to synchronize SError before context switching
> > or returning to guest.
> >
> > This function will be used by the later patches in this series; we
> > use "#if 0" to disable it temporarily to avoid a compiler warning.
> >
> > Signed-off-by: Wei Chen <Wei.Chen@xxxxxxx>
> > ---
> >  xen/arch/arm/traps.c | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> >
> > diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> > index 44a0281..ee7865b 100644
> > --- a/xen/arch/arm/traps.c
> > +++ b/xen/arch/arm/traps.c
> > @@ -2899,6 +2899,17 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
> >      }
> >  }
> >
> > +#if 0
> > +static void synchronize_serror(void)
> > +{
> > +    /* Synchronize against in-flight ld/st. */
> > +    dsb(sy);
> > +
> > +    /* A single instruction exception window */
> > +    isb();
> > +}
> > +#endif
> > +
> >  asmlinkage void do_trap_hyp_serror(struct cpu_user_regs *regs)
> >  {
> >      enter_hypervisor_head(regs);
> > --
> > 2.7.4

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
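
[Editor's note: a minimal sketch of how a barrier helper like the one in this
patch could be used at vCPU context switch. The dsb(sy); isb() pair forces any
SError pending from in-flight memory accesses to be taken before execution
continues, so running it between saving one vCPU and loading the next keeps a
previous guest's SError from being attributed to its successor. The
ctxt_switch() wrapper and the exact placement below are assumptions for
illustration, not code from this series; only the barrier sequence itself is
taken from the patch.]

    /* Drain any SError pending from in-flight loads/stores. */
    static void synchronize_serror(void)
    {
        /* Complete all outstanding memory accesses on this PE... */
        dsb(sy);

        /* ...then flush the pipeline, creating a single-instruction
         * exception window in which a pending SError is delivered. */
        isb();
    }

    /* Hypothetical call site, sketched against Xen's ctxt_switch_from/
     * ctxt_switch_to helpers; not the series' actual placement. */
    static void ctxt_switch(struct vcpu *prev, struct vcpu *next)
    {
        ctxt_switch_from(prev);   /* save the outgoing vCPU's state */

        /*
         * Take SErrors triggered by 'prev' now, before 'next' is
         * installed, so the fault is blamed on the guest that
         * actually caused it.
         */
        synchronize_serror();

        ctxt_switch_to(next);     /* load the incoming vCPU's state */
    }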