
Re: [Xen-devel] Questions about the in-tree Flask policy



On 09/17/2014 11:38 AM, Wei Liu wrote:
Hi Daniel

I'm working to get XSM tested in OSSTest. After some days of hacking
I've reached the point of actually having test cases to run.

Our plan is to at least replicate three Debian smoke tests:
   test-amd64-amd64-xl-xsm
   test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm
   test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm

The -xsm suffix is to distinguish them from the normal non-xsm tests.

However I cannot get the in-tree policy to work with OSSTest. It's much
stricter than I had anticipated. Guest creation is OK, but it doesn't
seem to allow "xl save" to run.

This is a policy error; the example policy should permit domain save
operations on domU_t, but currently does not do so.
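Concretely, each of the AVC denials quoted later in this message corresponds
to a missing allow rule. An illustrative sketch of the kind of addition
involved (the in-tree policy under tools/flask/policy/ may express these
through existing macros rather than raw rules, so treat this as a sketch
rather than the actual fix):

```
# Sketch only: grant dom0 the permissions the AVC log shows as denied,
# so that "xl save" of a domU_t guest can proceed.
allow dom0_t domU_t:domain  getvcpucontext;
allow dom0_t domU_t:domain2 cacheflush;
allow dom0_t domU_t:mmu     stat;
```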

In my test-amd64-amd64-xl-xsm run:

2014-09-17 15:05:39 Z executing ssh ... root@xxxxxxxxxxxxx xl save 
debian.guest.osstest image
Saving to image new xl format (info 0x0/0x0/824)
xc: error: xc_alloc_hypercall_buffer: mmap failed (22 = Invalid argument): 
Internal error
xc: error: Couldn't allocate to_send array: Internal error
libxl: error: libxl_dom.c:1673:libxl__xc_domain_save_done: saving domain: 
domain responded to suspend request: Cannot allocate memory
Failed to save domain, resuming domain
libxl: error: libxl.c:475:libxl__domain_resume: xc_domain_resume failed for 
domain 1: Permission denied
2014-09-17 15:05:40 Z command nonzero waitstatus 256: ssh -o 
StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=100 -o 
ServerAliveInterval=100 -o PasswordAuthentication=no -o 
ChallengeResponseAuthentication=no -o 
UserKnownHostsFile=tmp/t.known_hosts_standalone.test-amd64-amd64-xl-xsm 
root@xxxxxxxxxxxxx xl save debian.guest.osstest image
status 256 at Osstest/TestSupport.pm line 391.
+ rc=1
+ date -u +%Y-%m-%d %H:%M:%S Z exit status 1
2014-09-17 15:05:40 Z exit status 1
+ exit 1

The guest has seclabel='system_u:system_r:domU_t'.

In 'xl dmesg' there are some avc messages (duplicates removed):

(XEN) Cannot bind IRQ4 to dom0. In use by 'ns16550'.
(XEN) Cannot bind IRQ2 to dom0. In use by 'cascade'.
(XEN) avc:  denied  { cacheflush } for domid=0 target=1 
scontext=system_u:system_r:dom0_t tcontext=system_u:system_r:domU_t 
tclass=domain2
(XEN) avc:  denied  { stat } for domid=0 target=1 
scontext=system_u:system_r:dom0_t tcontext=system_u:system_r:domU_t tclass=mmu
(XEN) avc:  denied  { getvcpucontext } for domid=0 target=1 
scontext=system_u:system_r:dom0_t tcontext=system_u:system_r:domU_t 
tclass=domain
(XEN) avc:  denied  { cacheflush } for domid=0 target=3 
scontext=system_u:system_r:dom0_t tcontext=system_u:system_r:domU_t 
tclass=domain2

I tried to look at the policy file(s), only to find that there's a bunch
of files containing an excessive amount of information. I'm certainly not
an XSM expert and have no intention of becoming one at the moment. :-)

True, and you shouldn't have to be an expert to report errors (your current
report was exactly what was needed to fix the policy).

In the future, any AVC denied messages in the output under normal test
operation (i.e. not when a VM is deliberately misbehaving) should be treated
as a bug in the XSM policy, even when they don't cause visible failures.
Usually, the denial causes an operation to break outright (like the errors
above), and the answer is to add the missing permission to the proper part
of the policy.  In other cases, such as cacheflush, the process continues
but was unable to perform an important operation.  If checking for denials
is something that can easily be added to the test script as a failure
condition, that would be helpful (but this is certainly not a prerequisite
for adding the tests in the first place).
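As a sketch of that failure condition, assuming the harness can capture
'xl dmesg' output from the test host (the log line below is canned sample
data standing in for real output):

```shell
#!/bin/sh
# Hypothetical post-test check: treat any AVC denial in the captured
# hypervisor log as an XSM policy bug.  Here $log is canned sample data;
# a real harness would substitute the output of `xl dmesg`.
log='(XEN) avc:  denied  { cacheflush } for domid=0 target=1'

if printf '%s\n' "$log" | grep -q 'avc: *denied'; then
    echo "FAIL: unexpected XSM AVC denial in hypervisor log"
fi
```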

--
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

