
Re: [Xen-devel] [PATCH v4] Sanity check xsave area when migrating or restoring from older Xen versions



On Tue, 21 Oct 2014 20:00:53 +0100
Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:

> On 21/10/14 19:40, Don Koch wrote:
> > Xen 4.3 and older transferred a maximum-sized xsave area (as if all
> > the available XCR0 bits were set); the new version only transfers
> > based on the actual XCR0 bits. This may result in a smaller area if
> > the last sections were missing (e.g., the LWP area from an AMD
> > machine). If the transferred size exceeds the XCR0-derived size, the
> > part of the xsave area between the XCR0-derived and transferred sizes
> > is checked for zero data. If any part of that overflow area is
> > non-zero, we return with an error.
> >
> > Signed-off-by: Don Koch <dkoch@xxxxxxxxxxx>
> > ---
> > Changes in V4:
> > - Removed check of size based on xfeature_mask.
> > - Unsign some ints.
> > - Change %d to %u for unsigned ints.
> > - Move printk to only print if non-zero data found.
> >
> > Changes in V3:
> > - use h->data for zero check
> > - remove max size check (use size that was sent)
> > - fix error message (drop first byte value)
> > - fix "for" issues
> >
> > Changes in V2:
> > - Add check for size.
> > - Add check for non-zero data in unused part of block.
> >
> >  xen/arch/x86/hvm/hvm.c | 28 ++++++++++++++++------------
> >  1 file changed, 16 insertions(+), 12 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index f0e1edc..c2780d2 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -1971,6 +1971,7 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
> >      struct vcpu *v;
> >      struct hvm_hw_cpu_xsave *ctxt;
> >      struct hvm_save_descriptor *desc;
> > +    unsigned int i, overflow_start;
> >  
> >      /* Which vcpu is this? */
> >      vcpuid = hvm_load_instance(h);
> > @@ -2011,15 +2012,8 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
> >                          save_area) + XSTATE_AREA_MIN_SIZE);
> >          return -EINVAL;
> >      }
> > -    size = HVM_CPU_XSAVE_SIZE(xfeature_mask);
> > -    if ( desc->length > size )
> > -    {
> > -        printk(XENLOG_G_WARNING
> > -               "HVM%d.%d restore mismatch: xsave length %u > %u\n",
> > -               d->domain_id, vcpuid, desc->length, size);
> > -        return -EOPNOTSUPP;
> > -    }
> >      h->cur += sizeof (*desc);
> > +    overflow_start = h->cur;
> >  
> >      ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
> >      h->cur += desc->length;
> > @@ -2038,10 +2032,20 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
> >      size = HVM_CPU_XSAVE_SIZE(ctxt->xcr0_accum);
> >      if ( desc->length > size )
> >      {
> > -        printk(XENLOG_G_WARNING
> > -               "HVM%d.%d restore mismatch: xsave length %u > %u\n",
> > -               d->domain_id, vcpuid, desc->length, size);
> > -        return -EOPNOTSUPP;
> > +        /* Make sure missing bytes are all zero. */
> 
> Please make a reference to the bug in this comment, so the reason for
> the strange check is a little more obvious to someone glancing at the code.
> 
> Perhaps
> 
> /*
>  * Xen-4.3 and older used to send longer-than-needed xsave regions.
>  * Permit loading the record if the extra data is all zero.
>  */
> 
> (suitably wrapped, given its natural indentation)

OK, will do.

> > +        for ( i = size; i < desc->length; i++ )
> > +        {
> > +            if ( h->data[overflow_start + i] )
> > +            {
> > +                printk(XENLOG_G_WARNING
> > +                       "HVM%u.%u restore mismatch: xsave length %u > %u\n",
> > +                       d->domain_id, vcpuid, desc->length, size);
> > +                printk(XENLOG_G_WARNING
> > +                       "HVM%u.%u restore mismatch: xsave has non-zero data starting at %#x\n",
> > +                       d->domain_id, vcpuid, i);
> 
> This should be one message.  Also note that, while a lot of code gets it
> wrong, domain_id is signed while vcpuid is unsigned.

I had suggested one message. Jan said it should be two.

I'll change it to one combined message (after letting Jan get his say).

Also, after dredging through the includes (and I may easily have taken
a wrong turn), it looks like domain_id in this structure is declared as
a domid_t, which is typedef'd as a uint16_t. But I'll change it back to
%d if you insist.
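
For reference, this is the definition I ended up at (assuming I read
the public headers right):

    /* xen/include/public/xen.h */
    typedef uint16_t domid_t;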

> Perhaps
> 
> "HVM%d.%u restore: xsave length %#x > %#x with non-zero data at %#x\n"
> 
> It is quite unhelpful to report three related numbers with two in one
> base and one in another.  I feel hex is more useful here, when comparing
> the offsets against the manuals.

Sounds good to me.
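
So that we're looking at the same thing, the check in v5 would then end
up roughly like this (untested sketch combining your suggested comment
and message format; final wording and wrapping may still change):

        /*
         * Xen-4.3 and older used to send longer-than-needed xsave regions.
         * Permit loading the record if the extra data is all zero.
         */
        for ( i = size; i < desc->length; i++ )
        {
            if ( h->data[overflow_start + i] )
            {
                printk(XENLOG_G_WARNING
                       "HVM%d.%u restore: xsave length %#x > %#x with non-zero data at %#x\n",
                       d->domain_id, vcpuid, desc->length, size, i);
                return -EOPNOTSUPP;
            }
        }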

> ~Andrew

Thanks,
-d

> > +                return -EOPNOTSUPP;
> > +            }
> > +        }
> >      }
> >      /* Checking finished */
> >  
> 
> 
