
Re: [Xen-devel] [PATCH 7/7] x86/viridian: implement the crash MSRs



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 20 March 2017 12:38
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Wei Liu
> <wei.liu2@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>; xen-
> devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH 7/7] x86/viridian: implement the crash MSRs
> 
> >>> On 17.03.17 at 10:57, <paul.durrant@xxxxxxxxxx> wrote:
> > @@ -234,6 +247,10 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint32_t leaf,
> >
> >          res->a = u.lo;
> >          res->b = u.hi;
> > +
> > +        if ( viridian_feature_mask(d) & HVMPV_crash_ctl )
> > +            res->d = CPUID3D_CRASH_MSRS;
> 
> |= (for consistency with other code as well as to avoid the need
> to touch this line again going forward)

Ok.
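
Something like this for v2, then (just a sketch, reusing the names from the
hunk above):

    if ( viridian_feature_mask(d) & HVMPV_crash_ctl )
        res->d |= CPUID3D_CRASH_MSRS;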

> 
> > @@ -603,6 +620,37 @@ int wrmsr_viridian_regs(uint32_t idx, uint64_t val)
> >              update_reference_tsc(d, 1);
> >          break;
> >
> > +    case HV_X64_MSR_CRASH_P0:
> > +    case HV_X64_MSR_CRASH_P1:
> > +    case HV_X64_MSR_CRASH_P2:
> > +    case HV_X64_MSR_CRASH_P3:
> > +    case HV_X64_MSR_CRASH_P4:
> > +        BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >
> > +                     ARRAY_SIZE(d->arch.hvm_domain.viridian.crash_param));
> 
> >= as it looks.

Yes.
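
So in v2 the check would become something along these lines (sketch, same
identifiers as in the patch):

    BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
                 ARRAY_SIZE(d->arch.hvm_domain.viridian.crash_param));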

> 
> > +        idx -= HV_X64_MSR_CRASH_P0;
> > +        d->arch.hvm_domain.viridian.crash_param[idx] = val;
> > +        break;
> > +
> > +    case HV_X64_MSR_CRASH_CTL:
> > +    {
> > +        HV_CRASH_CTL_REG_CONTENTS ctl;
> > +
> > +        ctl.AsUINT64 = val;
> > +
> > +        if ( !ctl.CrashNotify )
> > +            break;
> > +
> > +        printk(XENLOG_G_INFO "d%d: VIRIDIAN CRASH: %lx %lx %lx %lx %lx\n",
> > +               d->domain_id,
> > +               d->arch.hvm_domain.viridian.crash_param[0],
> > +               d->arch.hvm_domain.viridian.crash_param[1],
> > +               d->arch.hvm_domain.viridian.crash_param[2],
> > +               d->arch.hvm_domain.viridian.crash_param[3],
> > +               d->arch.hvm_domain.viridian.crash_param[4]);
> > +        break;
> > +    }
> > +
> >      default:
> >          if (idx >= VIRIDIAN_MSR_MIN && idx <= VIRIDIAN_MSR_MAX)
> >              gdprintk(XENLOG_WARNING, "unimplemented MSR %08x\n",
> > @@ -730,6 +778,25 @@ int rdmsr_viridian_regs(uint32_t idx, uint64_t *val)
> >          break;
> >      }
> >
> > +    case HV_X64_MSR_CRASH_P0:
> > +    case HV_X64_MSR_CRASH_P1:
> > +    case HV_X64_MSR_CRASH_P2:
> > +    case HV_X64_MSR_CRASH_P3:
> > +    case HV_X64_MSR_CRASH_P4:
> > +        idx -= HV_X64_MSR_CRASH_P0;
> > +        *val = d->arch.hvm_domain.viridian.crash_param[idx];
> > +        break;
> 
> I think it would be better to reproduce the BUILD_BUG_ON() here.
> 

Ok.
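
I.e. the read side would end up looking roughly like this (sketch only,
mirroring the write side above):

    case HV_X64_MSR_CRASH_P0:
    case HV_X64_MSR_CRASH_P1:
    case HV_X64_MSR_CRASH_P2:
    case HV_X64_MSR_CRASH_P3:
    case HV_X64_MSR_CRASH_P4:
        BUILD_BUG_ON(HV_X64_MSR_CRASH_P4 - HV_X64_MSR_CRASH_P0 >=
                     ARRAY_SIZE(d->arch.hvm_domain.viridian.crash_param));
        idx -= HV_X64_MSR_CRASH_P0;
        *val = d->arch.hvm_domain.viridian.crash_param[idx];
        break;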

> > +    case HV_X64_MSR_CRASH_CTL:
> > +    {
> > +        HV_CRASH_CTL_REG_CONTENTS ctl;
> > +
> > +        ctl.CrashNotify = 1;
> 
> You leak 63 bits of hypervisor stack here.
> 

Yes. I should use an initializer instead, so the whole 64-bit value starts out 
zeroed.
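
Roughly this, I think (sketch; assumes CrashNotify is directly accessible, as
in the patch):

    HV_CRASH_CTL_REG_CONTENTS ctl = { .AsUINT64 = 0 };

    ctl.CrashNotify = 1;
    *val = ctl.AsUINT64;

That way nothing uninitialised from the stack can reach the guest.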

> > --- a/xen/include/asm-x86/hvm/viridian.h
> > +++ b/xen/include/asm-x86/hvm/viridian.h
> > @@ -96,6 +96,7 @@ struct viridian_domain
> >      union viridian_hypercall_gpa hypercall_gpa;
> >      struct viridian_time_ref_count time_ref_count;
> >      union viridian_reference_tsc reference_tsc;
> > +    uint64_t crash_param[5];
> 
> Are these really per-domain values (normally MSRs are per-vCPU)?

I don't think the usage described in section 4.3 of the spec really warrants 
making these per-vCPU. I've not looked at the exact sequence of events, but I'm 
fairly sure the MSRs are written by the Windows crash kernel, which is 
single-threaded.

> And don't they need migrating?
> 

I don't think so, given that they are only written just prior to the domain 
dying anyway.

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

