
Re: [XTF-ARM] tests: Hypercall xen_version testing


  • To: Michal Orzel <michal.orzel@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 16 Dec 2022 12:50:40 +0100
  • Cc: sstabellini@xxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 16 Dec 2022 11:50:47 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 16.12.2022 11:53, Michal Orzel wrote:
> 
> 
> On 16/12/2022 11:21, Jan Beulich wrote:
>>
>>
>> On 16.12.2022 10:30, Michal Orzel wrote:
>>> On 15/12/2022 16:48, Jan Beulich wrote:
>>>> On 15.12.2022 16:25, Michal Orzel wrote:
>>>>> --- /dev/null
>>>>> +++ b/tests/hyp-xen-version/main.c
>>>>> @@ -0,0 +1,105 @@
>>>>> +/**
>>>>> + * @file tests/hyp-xen-version/main.c
>>>>> + * @ref test-hyp-xen-version
>>>>> + *
>>>>> + * @page test-hyp-xen-version Hypercall xen_version
>>>>> + *
>>>>> + * Functional testing of xen_version hypercall.
>>>>> + *
>>>>> + * @see tests/hyp-xen-version/main.c
>>>>> + */
>>>>> +#include <xtf.h>
>>>>> +
>>>>> +const char test_title[] = "Hypercall xen_version testing";
>>>>> +
>>>>> +#define INVALID_CMD -1
>>>>> +
>>>>> +void test_main(void)
>>>>> +{
>>>>> +    int ret;
>>>>> +
>>>>> +    printk("Checking XENVER_version:\n");
>>>>> +    {
>>>>> +        /*
>>>>> +        * Version is returned directly in format: ((major << 16) | minor),
>>>>> +        * so no need to check the return value for an error.
>>>>> +        */
>>>>> +        ret = hypercall_xen_version(XENVER_version, NULL);
>>>>> +        printk(" version: %u.%u\n", ret >> 16, ret & 0xFFFF);
>>>>> +    }
>>>>> +
>>>>> +    printk("Checking XENVER_extraversion:\n");
>>>>> +    {
>>>>> +        xen_extraversion_t xen_ev;
>>>>> +        memset(&xen_ev, 0, sizeof(xen_ev));
>>>>> +
>>>>> +        ret = hypercall_xen_version(XENVER_extraversion, xen_ev);
>>>>> +        if ( ret < 0 )
>>>>> +            return xtf_error("Error %d\n", ret);
>>>>
>>>> This, ...
>>>>
>>>>> +        printk(" extraversion: %s\n", xen_ev);
>>>>> +    }
>>>>> +
>>>>> +    printk("Checking XENVER_compile_info:\n");
>>>>> +    {
>>>>> +        xen_compile_info_t xen_ci;
>>>>> +        memset(&xen_ci, 0, sizeof(xen_ci));
>>>>> +
>>>>> +        ret = hypercall_xen_version(XENVER_compile_info, &xen_ci);
>>>>> +        if ( ret < 0 )
>>>>> +            return xtf_error("Error %d\n", ret);
>>>>
>>>> ... this, and ...
>>>>
>>>>> +        printk(" compiler:       %s\n", xen_ci.compiler);
>>>>> +        printk(" compile_by:     %s\n", xen_ci.compile_by);
>>>>> +        printk(" compile_domain: %s\n", xen_ci.compile_domain);
>>>>> +        printk(" compile_date:   %s\n", xen_ci.compile_date);
>>>>> +    }
>>>>> +
>>>>> +    printk("Checking XENVER_changeset:\n");
>>>>> +    {
>>>>> +        xen_changeset_info_t xen_cs;
>>>>> +        memset(&xen_cs, 0, sizeof(xen_cs));
>>>>> +
>>>>> +        ret = hypercall_xen_version(XENVER_changeset, &xen_cs);
>>>>> +        if ( ret < 0 )
>>>>> +            return xtf_error("Error %d\n", ret);
>>>>
>>>> ... this can fail because of XSM denying access. (Others can of course
>>>> also fail for this reason, but here possible failure is kind of
>>>> "intended" - see the dummy xsm_xen_version() handling.) Therefore I
>>>> would like to suggest that you also special case getting back -EPERM,
>>>> resulting in e.g. just a warning instead of an error.
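>>>>
>>>> Something along these lines is what I have in mind (an untested
>>>> sketch only, assuming EPERM is available from the errno values the
>>>> framework carries, and that xtf_warning() is the warning counterpart
>>>> to xtf_error()):
>>>>
>>>>     ret = hypercall_xen_version(XENVER_changeset, &xen_cs);
>>>>     if ( ret == -EPERM )
>>>>         /* Refused by the security policy: warn, but don't fail. */
>>>>         xtf_warning("XENVER_changeset denied\n");
>>>>     else if ( ret < 0 )
>>>>         return xtf_error("Error %d\n", ret);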
>>> When writing a test I did make sure to check xsm_xen_version *for the 
>>> operations that I covered*
>>> and my understanding is as follows:
>>> For XENVER_version and XENVER_get_features, it returns 0 so deny is false.
>>> For other commands I test, xsm_default_action is called with XSM_HOOK which 
>>> returns 0 as well.
>>> So AFAICT nothing can result in setting deny to true.
>>> But even in case of setting deny to true, it would just result in copying 
>>> "<denied>" into
>>> the respective buffer. It would not alter the hypercall return value.
>>
>> For dummy itself all is fine; arrangements there suggest to me though
>> that the intention was that an actual Flask policy may be written such
>> that some of these might actually be refused. My recollection actually
>> is that when the distinction for the sub-ops was introduced, quite a
>> bit of discussion happened as to what may or may not (optionally
>> or uniformly) be rejected.
> Ok but in any case, in the current xen_version implementation, it will just
> result in storing "<denied>". No -EPERM will be returned. So do you think it
> would make sense to add handling for it in the test even though it cannot be
> triggered?
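>
> For reference, the relevant pattern on the hypervisor side looks
> roughly like this (paraphrased from the string sub-op handling in
> xen/common/kernel.c; helper names given from memory, so treat it as
> a sketch rather than a quote):
>
>     bool deny = !!xsm_xen_version(XSM_OTHER, cmd);
>     ...
>     case XENVER_extraversion:
>     {
>         xen_extraversion_t extraversion;
>
>         memset(extraversion, 0, sizeof(extraversion));
>         /* When denied, the buffer gets "<denied>"; the return value
>            stays 0, so the guest never sees -EPERM here. */
>         safe_strcpy(extraversion,
>                     deny ? xen_deny() : xen_extra_version());
>         if ( copy_to_guest(arg, extraversion, ARRAY_SIZE(extraversion)) )
>             return -EFAULT;
>         return 0;
>     }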

Oh, I see my mistake now. Apologies, I take back my request.

Jan