
Re: [Xen-devel] [PATCH 8/8] xl.pod.1: improve documentation of FLASK commands

On 01/04/2012 05:47 AM, Stefano Stabellini wrote:
> On Thu, 15 Dec 2011, Daniel De Graaf wrote:
>> On 12/15/2011 03:56 PM, Konrad Rzeszutek Wilk wrote:
>>>> There is already an example policy file in
>>>> tools/flask/policy/policy/modules/xen/xen.te, although it will
>>>> likely require additional rules to run in enforcing mode. The
>>>> policy is not built as part of the normal build process, but it can
>>>> be built by running "make -C tools/flask/policy". If using Fedora 16
>>>> (or systems with a checkpolicy version >24) the Makefile will need
>>>> to be adjusted to produce policy version 24, which is the latest
>>>> version supported by Xen.
>>> Is there a howto on how to use it for newbies? Or how to apply policies
>>> against a domain? Would it make sense to have that as part of the 'man
>>> xl' ?
>> I just sent an updated example policy that demonstrates most of the features
>> that can be used without dom0 disaggregation. It has two main types for domU:
>> domU_t is a domain that can communicate with any other domU_t
>> isolated_domU_t can only communicate with dom0
>> There is also a resource type for device passthrough, configured for domU_t.
>> To label the PCI device 3:2.0 for passthrough, run:
>> ./tools/flask/utils/flask-label-pci 0000:03:02.0 system_u:object_r:nic_dev_t
>> I'm not sure this belongs in "man xl" except for a mention of how to
>> set the security label of a newly created domain. There is already a
>> docs/misc/xsm-flask.txt that explains a bit about the policy creation;
>> this may need to be updated to better explain how to use FLASK.
> It would be great to have a short introduction to flask in the xl man
> page. What do you think about the following?
> diff -r 50117a4d1a2c docs/man/xl.pod.1
> --- a/docs/man/xl.pod.1       Mon Jan 02 12:43:07 2012 +0000
> +++ b/docs/man/xl.pod.1       Wed Jan 04 10:46:47 2012 +0000
> @@ -997,6 +997,20 @@ Get information about how much freeable 
>  =head2 FLASK
> +B<FLASK> is a security framework that defines a mandatory access control policy
> +providing fine-grained controls over Xen domains, allowing the policy writer
> +to define what interactions between domains, devices, and the hypervisor are
> +permitted. Some examples of what you can do using XSM/FLASK:
> + - Prevent two domains from communicating via event channels or grants
> + - Control which domains can use device passthrough (and which devices)
> + - Restrict or audit operations performed by privileged domains
> + - Prevent a privileged domain from arbitrarily mapping pages from other
> +   domains.
> +
> +See the following document for more details:
> +
> +L<http://xenbits.xen.org/docs/unstable/misc/xsm-flask.txt>
> +
>  =over 4
>  =item B<getenforce>
> As you can see, I linked docs/misc/xsm-flask.txt from the xl man page,
> however xsm-flask.txt still references xend so it needs to be updated.

This is a good introduction; I have an update to docs/misc/xsm-flask.txt
that references xl and incorporates some of the changes in the example
policy (will post momentarily).

> Also it would be great to link the example policy too, but that one is
> not online because it is not under docs and it is not installed by
> default either. Maybe we need to move the example policy to docs? Or
> maybe it is best to install a copy of it to /etc/xen by default?
The example policy doesn't really belong in docs because it needs to be
compiled to be usable, and this depends on a number of other files (all
files under tools/flask/policy/policy, to be exact). Compiling and
installing FLASK policy during the normal build process (conditional on
FLASK_ENABLE to avoid adding SELinux build tools to build dependencies?)
would be the best solution. The policy must be installed to /boot, not
/etc/xen, because the initial policy load happens prior to starting dom0.
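Concretely, loading the policy before dom0 starts means passing it to
the hypervisor as an extra multiboot module in the bootloader entry,
something like the sketch below (the policy file name, kernel versions,
and the flask_enforcing option are illustrative):

    title Xen
        kernel /boot/xen.gz flask_enforcing=1
        module /boot/vmlinuz-3.1 root=/dev/sda1 ro
        module /boot/initramfs-3.1.img
        module /boot/xenpolicy.24
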

Daniel De Graaf
National Security Agency

