Re: [Xen-devel] [PATCH v3 0/2][XTF] xtf/vpmu VPMU tests
On 04/25/2017 02:50 PM, Andrew Cooper wrote:
> On 24/04/17 18:54, Mohit Gambhir wrote:
>> Mohit Gambhir (2):
>>   xtf/vpmu: Add Intel PMU MSR addresses
>>   xtf/vpmu: MSR read/write tests for VPMU
>>
>>  arch/x86/include/arch/msr-index.h |  11 +
>>  tests/vpmu/Makefile               |   9 +
>>  tests/vpmu/main.c                 | 504 ++++++++++++++++++++++++++++++++++++++
>>  3 files changed, 524 insertions(+)
>>  create mode 100644 tests/vpmu/Makefile
>>  create mode 100644 tests/vpmu/main.c
>
> Independently from the content of this series, how have you found the XTF to use? Any issues or unexpected corner cases which can be improved?
>
> ~Andrew

I think overall it was fairly easy to use, and it was very useful for the kind of testing we wanted to do for VPMU. It helped me find three bugs: one hypervisor crash, one potential state leak, and one piece of missing functionality. I didn't find any bugs or corner cases in XTF itself, but here are a few nice-to-have features that you could consider adding:

1. As I said in response to one of the review comments, it would be nice to have some kind of "expected failure" return type that indicates a known bug. That's particularly useful if the person writing the tests is not a Xen developer. You seem to have indicated that xtf_error() means it is a known error, but that wasn't intuitive to me.

2. It's useful to print the __LINE__ number from where an error/failure is reported.

3. Is test_main() considered one test? If so, it would be useful to define tests at a finer granularity. For instance, in the VPMU case each MSR test should really be a separate test. It would require adding some semantics to the names of the functions that define a test, but that way it's easier to keep track of the health of the software.

4. Is there a way to build just one test? Something like make test-hvm64-vpmu?

Thanks,
Mohit