
Re: [Xen-devel] [PATCH v4 7/7] tools, libxl: handle the iomem parameter with the memory_mapping hcall



On 04/01/2014 11:26 AM, Julien Grall wrote:
On 1 April 2014 16:13, Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx> wrote:
On Tue, 2014-03-25 at 03:02 +0100, Arianna Avanzini wrote:
Currently, the configuration-parsing code concerning the handling of the
iomem parameter only invokes the XEN_DOMCTL_iomem_permission hypercall.
This commit lets the XEN_DOMCTL_memory_mapping hypercall be invoked
after XEN_DOMCTL_iomem_permission when the iomem parameter is parsed
from a domU configuration file, so that the address range can be mapped
to the address space of the domU. The hypercall is invoked only for
domains using an auto-translated physmap.
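
(For reference, a minimal sketch of what the toolstack side amounts to,
assuming a 1:1 mapping and the existing libxc wrappers
xc_domain_iomem_permission() and xc_domain_memory_mapping(); the helper
name is illustrative, this is not the actual libxl patch:)

#include <stdbool.h>
#include <xenctrl.h>

/* Illustrative helper, not the actual libxl code: grant the domain
 * permission to an iomem range and, for an auto-translated guest,
 * also map it 1:1 into the guest physmap. */
static int setup_iomem_range(xc_interface *xch, uint32_t domid,
                             unsigned long start_pfn, unsigned long nr_pfns,
                             bool auto_translated)
{
    int ret = xc_domain_iomem_permission(xch, domid, start_pfn, nr_pfns, 1);
    if (ret < 0)
        return ret;

    if (!auto_translated)
        return 0;

    /* 1:1 mapping: guest frame number == machine frame number. */
    return xc_domain_memory_mapping(xch, domid, start_pfn, start_pfn,
                                    nr_pfns, DPCI_ADD_MAPPING);
}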

Sorry for not noticing this sooner but I've just been looking at this
again and it seems that XEN_DOMCTL_memory_mapping is a superset of
XEN_DOMCTL_iomem_permission.

AFAICT XEN_DOMCTL_memory_mapping does exactly the same
iomem_{permit,deny}_access as XEN_DOMCTL_iomem_permission and then, iff
the guest is paging_mode_translate, sets up a p2m mapping for it.
(There's also some extra debug logging; let's ignore it.)
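
(I.e. roughly the following; a simplified sketch based on the
description above rather than the actual xen/arch/x86/domctl.c code,
with error handling and rollback dropped. map_one_mmio_page() is a
hypothetical stand-in for the arch-specific p2m update, e.g.
set_mmio_p2m_entry() on x86.)

/* Simplified sketch of the "add" path of XEN_DOMCTL_memory_mapping. */
static long memory_mapping_add(struct domain *d, unsigned long gfn,
                               unsigned long mfn, unsigned long nr_mfns)
{
    unsigned long mfn_end = mfn + nr_mfns - 1;
    unsigned long i;
    long ret;

    /* The caller must itself have access to the MFN range. */
    if ( !iomem_access_permitted(current->domain, mfn, mfn_end) )
        return -EPERM;

    /* The same rangeset update XEN_DOMCTL_iomem_permission performs... */
    ret = iomem_permit_access(d, mfn, mfn_end);
    if ( ret )
        return ret;

    /* ...plus the p2m mappings, but only for auto-translated guests. */
    if ( paging_mode_translate(d) )
        for ( i = 0; i < nr_mfns && !ret; i++ )
            ret = map_one_mmio_page(d, gfn + i, mfn + i); /* hypothetical */

    return ret;
}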

IOW could the toolstack's existing call to XEN_DOMCTL_iomem_permission
not be completely replaced with a call to XEN_DOMCTL_memory_mapping and
have exactly the same effect as this patch, without the need for the
toolstack to infer the paging mode of the guest?

I think the answer is yes, can someone confirm?

For x86 HVM, AFAIU only QEMU knows the memory layout of the guest.
So we can't call XEN_DOMCTL_memory_mapping here (at least not to map
the range in the p2m).

One subtle distinction is that it appears that XEN_DOMCTL_memory_mapping
cannot grant access to mfns for which it does not itself have access.
That seems reasonable though.

In fact, the lack of this check in XEN_DOMCTL_iomem_permission could be
a security issue -- a domain with permission to build domains could
construct a sock-puppet domain to which it gives access to ports which
it cannot itself see. Or maybe this is deliberate and isolates the
builder domain from needing h/w permissions, in which case is
XEN_DOMCTL_memory_mapping wrong? Daniel?

I think XEN_DOMCTL_memory_mapping is correct (and therefore
XEN_DOMCTL_iomem_permission wrong). It makes sense with the
builder domain patch series from Daniel:
see http://lists.xen.org/archives/html/xen-devel/2014-03/msg03553.html

Currently, XEN_DOMCTL_memory_mapping is allowed for device model domains
whereas XEN_DOMCTL_iomem_permission is restricted to dom0 only.  This is
probably the reason why an iomem_access_permitted check is not present
in XEN_DOMCTL_iomem_permission.

If FLASK is enabled, both domctls do the same permission checking based
on the security label of the memory range: that the current domain has
the RESOURCE__{ADD,REMOVE}_IOMEM permission, and the target domain has
the RESOURCE__USE permission.  This prevents the sock-puppet method from
being used to permit arbitrary accesses to created domains, but requires
that these restrictions be done at the granularity of the security
labels, which may not be as flexible as preferred in some setups.
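
(As a hedged sketch of that dual check, not the actual
xsm/flask/hooks.c code; domain_sid() and iomem_range_sid() are
illustrative stand-ins for the SID lookups, and a real range can span
several labels:)

/* Illustrative only: both domctls end up enforcing this pair of checks
 * against the security label (SID) of the iomem range. */
static int check_iomem_add(struct domain *caller, struct domain *target,
                           uint64_t first_mfn, uint64_t last_mfn)
{
    uint32_t range_sid = iomem_range_sid(first_mfn, last_mfn); /* stand-in */
    int rc;

    /* Calling domain: may it hand out this range? */
    rc = avc_has_perm(domain_sid(caller), range_sid,
                      SECCLASS_RESOURCE, RESOURCE__ADD_IOMEM, NULL);
    if ( rc )
        return rc;

    /* Target domain: may it use this range at all? */
    return avc_has_perm(domain_sid(target), range_sid,
                        SECCLASS_RESOURCE, RESOURCE__USE, NULL);
}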

While the current design does allow for a domain builder to manage
resources that it cannot directly use on its own, I don't think this was
ever really a design decision.  There are few (if any) security gains
from being able to block a domain builder from accessing resources if it
can create domains that access these resources, since it can just
create sock-puppet domains or corrupt a domain that does have access.

I think changing XEN_DOMCTL_iomem_permission to require the current
domain to pass an iomem_access_permitted check before permitting access
is reasonable.  It will require some adjustments to my domain builder
series which currently relies on the old behavior, but those should be
fairly simple (cloning the rangesets instead of swapping).  If this
change is made, I think similar changes to the other rangeset domctls
(irq, ioport) should be done at the same time.
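
(A rough sketch of the behaviour being proposed, not an actual patch;
the real change would go in the XEN_DOMCTL_iomem_permission handler in
xen/common/domctl.c, and the helper name here is illustrative:)

/* Proposed behaviour: refuse to grant (or deny) iomem ranges that the
 * calling domain cannot itself access, mirroring the check that
 * XEN_DOMCTL_memory_mapping already performs. */
static long iomem_permission_checked(struct domain *d, unsigned long first_mfn,
                                     unsigned long nr_mfns, bool allow)
{
    unsigned long last_mfn = first_mfn + nr_mfns - 1;

    /* New check: a builder domain may only hand out ranges it holds. */
    if ( !iomem_access_permitted(current->domain, first_mfn, last_mfn) )
        return -EPERM;

    return allow ? iomem_permit_access(d, first_mfn, last_mfn)
                 : iomem_deny_access(d, first_mfn, last_mfn);
}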

--
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

