
Re: [Xen-devel] [PATCH v2 2/2] x86/Intel: virtualize support for cpuid faulting



On 14/10/16 20:36, Kyle Huey wrote:
> On Fri, Oct 14, 2016 at 10:18 AM, Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx> wrote:
>> On a slightly separate note, as you have just been a successful
>> guinea-pig for XTF, how did you find it?  It is a very new (still
>> somewhat in development) system but the project is looking to try and
>> improve regression testing in this way, especially for new features.  I
>> welcome any feedback.

FWIW, I have just done some library improvements and rebased the test.

> It's pretty slick.  Much better than what Linux has ;)
>
> I do think it's a bit confusing that xtf_has_fep is false on PV guests.

Now that you point it out, I can see how it would be confusing.  This
is due to the history of FEP.

The sequence `ud2; .ascii 'xen'; cpuid` has been around for ages (it
predates faulting and hardware with mask/override MSRs), and is used by
PV guests specifically to request Xen's CPUID information, rather than
getting the real hardware information.
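
In a PV guest that looks something like the following (a minimal sketch
using GCC inline assembly; the surrounding C is illustrative, not
lifted from any real guest):

    unsigned int eax = 0, ebx, ecx, edx;

    /* The prefix tells Xen to handle the CPUID itself, rather than
     * letting the instruction execute natively. */
    asm volatile ( "ud2; .ascii \"xen\"; cpuid"
                   : "+a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx) );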

There is also an rdtscp variant for PV guests, used for virtual TSC modes.

In Xen 4.5, I introduced the same prefix to HVM guests, but for
arbitrary instructions.  This was for the express purpose of testing the
x86 instruction emulator.

As a result, CPUID in PV guests is the odd case out.
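
As a sketch of the HVM form, forcing a single rdtsc through the
emulator (the FEP macro name here is made up for illustration, not
XTF's spelling):

    #include <stdint.h>

    /* Forced Emulation Prefix: only honoured when Xen is booted with
     * the hvm_fep option. */
    #define FEP "ud2; .ascii \"xen\"; "

    static inline uint64_t emulated_rdtsc(void)
    {
        uint32_t lo, hi;

        asm volatile ( FEP "rdtsc" : "=a" (lo), "=d" (hi) );

        return ((uint64_t)hi << 32) | lo;
    }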

>
> It might also be nice to (at least optionally) have xtf_assert(cond,
> message) so instead of
>
> if ( cond )
>     xtf_failure(message);
>
> you can write
>
> xtf_assert(!cond, message);
>
> A bonus of doing this is that the framework could actually count how
> many checks were run.  So for the HVM tests (which don't run the FEP
> bits) instead of getting "Test result: SKIP" you could say "Test
> result: 9 PASS 1 SKIP" or something similar.
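
A minimal sketch of that suggestion, layered over the existing
xtf_failure() call (the xtf_assert() macro itself is hypothetical; XTF
does not currently provide it):

    /* Hypothetical helper: fail, with a message, when the condition
     * does not hold.  A count of executed checks could be taken here
     * as well. */
    #define xtf_assert(cond, message)                   \
        do {                                            \
            if ( !(cond) )                              \
                xtf_failure("Fail: %s\n", message);     \
        } while ( 0 )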

Boot with "hvm_fep" on the command line and the tests should end up
reporting success.
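
For example, in a grub2 entry (the kernel path and other options are
placeholders):

    multiboot2 /boot/xen.gz dom0_mem=1024M hvm_fep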

It is a deliberate design choice to have a single result from a single
run of the test kernel, a design inherited from XenServer's internal
testing system (which runs ~10k machine hours of testing every single
day).

During development, I can see the attraction of having a unittest-style
report, but it becomes counterproductive at larger scale.  Subdividing
the specific failures is not helpful for the high-level view, so it is
omitted to simplify status reporting.

Once a test is developed and stable, it should always be reporting
SUCCESS; anything else indicates an issue to be fixed.  When
investigating failures, the actual text of the failure message is the
important bit of information.

~Andrew
