Re: [Xen-devel] [PATCH v2] Sanity check xsave area when migrating or restoring from older Xen versions
On Sat, 18 Oct 2014 00:36:28 +0100
Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
> On 17/10/2014 18:11, Don Koch wrote:
> > Xen 4.3 and older transferred a maximum sized xsave area (as if all
> > the available XCR0 bits were set); the new version only transfers
> > based on the actual XCR0 bits. This may result in a smaller area if
> > the last sections were missing (e.g., the LWP area from an AMD
> > machine). If the size doesn't match the XCR0-derived size, the size is
> > checked against the maximum size and the part of the xsave area
> > between the actual and maximum used size is checked for zero data. If
> > either the max size check fails or any part of the overflow area
> > contains non-zero data, we return an error.
> >
> > Signed-off-by: Don Koch <dkoch@xxxxxxxxxxx>
> > ---
> > xen/arch/x86/hvm/hvm.c | 29 ++++++++++++++++++++++++++++-
> > 1 file changed, 28 insertions(+), 1 deletion(-)
> >
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index f0e1edc..bdebc67 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -1971,6 +1971,8 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
> > struct vcpu *v;
> > struct hvm_hw_cpu_xsave *ctxt;
> > struct hvm_save_descriptor *desc;
> > + u32 eax, ebx, ecx, edx;
> > + uint8_t *overflow_start;
> >
> > /* Which vcpu is this? */
> > vcpuid = hvm_load_instance(h);
> > @@ -2041,8 +2043,32 @@ static int hvm_load_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
> > printk(XENLOG_G_WARNING
> > "HVM%d.%d restore mismatch: xsave length %u > %u\n",
> > d->domain_id, vcpuid, desc->length, size);
>
> This warning should be elided if we pass the zero-check.
OK. I'll combine it with the zero check message.
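Something along these lines, perhaps (just a sketch, not compile-tested; the
helper name and the message wording are placeholders):

    /* Sketch: accept an oversized record as long as the extra bytes are all
     * zero; only warn (and fail) when there is real data in the overflow. */
    static bool_t xsave_tail_is_zero(const uint8_t *rec, unsigned int size,
                                     unsigned int length)
    {
        unsigned int i;

        for ( i = size; i < length; i++ )
            if ( rec[i] )
                return 0;

        return 1;
    }

and then, at the length check:

    if ( desc->length > size )
    {
        if ( !xsave_tail_is_zero((const uint8_t *)ctxt, size, desc->length) )
        {
            printk(XENLOG_G_WARNING
                   "HVM%d.%d restore mismatch: xsave length %u > %u with junk in overflow\n",
                   d->domain_id, vcpuid, desc->length, size);
            return -EOPNOTSUPP;
        }
    }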
> > - return -EOPNOTSUPP;
> > +
> > + /* Check to see if the xsave_area is the maximum size.
> > + If so, it is likely the save is from an older xen. */
> > + cpuid_count(XSTATE_CPUID, 0, &eax, &ebx, &ecx, &edx);
>
> This check is bogus for heterogeneous hardware. We have no way of
> calculating what the maximum xsave area size on the sending side
> should have been...
I didn't think migration or save/restore was supported between heterogeneous
architectures. I'll drop this check.
> > + if ( desc->length !=
> > + ecx + offsetof(struct hvm_hw_cpu_xsave, save_area) ) {
> > + printk(XENLOG_G_WARNING
> > + "HVM%d.%d restore mismatch: xsave length %u != %u\n",
> > + d->domain_id, vcpuid, desc->length, ecx +
> > + (u32)offsetof(struct hvm_hw_cpu_xsave, save_area));
> > + return -EOPNOTSUPP;
> > + }
> > +
> > + /* Make sure unused bytes are all zero. */
>
> ...which is fine, as we literally only care whether the overflow data is
> all zeros or not.
>
> If it is all empty, we really don't care how large it was supposed to be
> before. If it is not empty, then something is certainly corrupt in this
> record.
>
> > + overflow_start = (uint8_t *)&ctxt->save_area -
> > + offsetof(struct hvm_hw_cpu_xsave, save_area);
>
> I am having a hard time working out whether this expression is correct.
> As ctxt is superimposed on top of h->data, would it not make sense to
> check h->data between the expected end, and the real length? h->data is
> even the correct type for doing this with.
That's OK. I had a hard time working it out in the first place. I'll try
a spin using h->data. I'll have to cache the correct offset as h->cur has
been mucked with by this time.
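Roughly what I mean (from memory, not compile-tested; "rec_off" is just a
placeholder for wherever the record payload starts in h->data):

    unsigned int rec_off = h->cur;   /* cache before h->cur is advanced */

    /* ... existing code reads the record and bumps h->cur past it ... */

    if ( desc->length > size )
    {
        unsigned int i;

        /* Bytes between the expected end and the real length must be zero. */
        for ( i = size; i < desc->length; i++ )
            if ( h->data[rec_off + i] )
                return -EOPNOTSUPP;
    }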
> > + for (int i = size; i < desc->length; i++)
>
> Style
Style is not really defined for "for". I checked the CODING_STYLE file and it
only defines style for "if" and "while". I found more examples of "for"
statements with no spaces inside the parens, so I went with that. Will fix.
Do you want me to update the CODING_STYLE file, too? (In a separate patch,
of course.)
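For reference, this is what I'll switch the loop to (matching the spacing used
elsewhere in hvm.c, and hoisting the declaration in case that was also the
objection):

    unsigned int i;

    for ( i = size; i < desc->length; i++ )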
> > + {
> > + if ( *(overflow_start + i) )
> > + {
> > + printk(XENLOG_G_WARNING
> > + "HVM%d.%d restore mismatch: xsave[%d] has non-zero
> > data: %x\n",
> > + d->domain_id, vcpuid, i, *(overflow_start +i));
>
> I don't think it is useful to print the value of the first non-zero
> byte. I would reduce it just to "junk found in overflow space".
>
> ~Andrew
>
> > + return -EOPNOTSUPP;
> > + }
> > + }
> > }
> > /* Checking finished */
> >
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel