
Re: [Xen-devel] [RFC PATCH V3 05/12] xen: Introduce vm_event

On 01/31/2015 08:24 AM, Tamas K Lengyel wrote:
On Fri, Jan 30, 2015 at 6:25 PM, Daniel De Graaf <dgdegra@xxxxxxxxxxxxx> wrote:
On 01/29/2015 04:46 PM, Tamas K Lengyel wrote:

To make the mem_event -> vm_event rename easier to review, the process is
broken into three pieces, of which this patch is the first. In this patch
the vm_event subsystem is introduced and hooked in, but it is not yet used
anywhere.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@xxxxxxxxxxxx>


diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index f20e89c..d6d403a 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -525,6 +525,18 @@ static XSM_INLINE int xsm_mem_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
       return xsm_default_action(action, current->domain, d);
+static XSM_INLINE int xsm_vm_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
+    return xsm_default_action(action, current->domain, d);
+static XSM_INLINE int xsm_vm_event_op(XSM_DEFAULT_ARG struct domain *d, int op)
+    return xsm_default_action(action, current->domain, d);


diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1da9f63..a4241b5 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -250,6 +250,7 @@ class hvm
   # XEN_DOMCTL_set_access_required
+    vm_event
   # XEN_DOMCTL_mem_sharing_op and XENMEM_sharing_op_{share,add_physmap}
   #  source = the domain making the hypercall
   #  target = domain whose memory is being shared

This implies that device model domains should be allowed to use the
operations covered by xsm_vm_event_op but not those covered by
xsm_vm_event_control.  If this is how the eventual operations are intended
to be used, the FLASK permissions also need to be split so that a similar
distinction can be made in the policy.
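
A split along those lines might look like the following access_vectors
fragment.  This is only a sketch: the permission names vm_event_control and
vm_event_op are hypothetical, chosen to mirror the XSM hook names; the
actual patch adds a single vm_event permission.

```
# Hypothetical split of the single vm_event permission (sketch, not
# from the patch); placed in class domain2 per the suggestion below.
class domain2
{
    # XEN_DOMCTL_vm_event_op: enable/disable the rings (toolstack only)
    vm_event_control
    # memops for access/paging/sharing (usable by a helper domain)
    vm_event_op
}
```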

After looking at the later patches in this series, this appears to be a flaw
in the existing FLASK hooks that got copied over.  While it is still useful
to fix, it may be better to make the split in a separate patch from the
renames.  Given that VM events apply to more than just HVM domains, it may
be useful to move the new permission(s) from class hvm to either domain2 or
mmu.

Daniel De Graaf
National Security Agency

Moving it to domain2 would make sense to me. The naming seems to be
pretty poor, so I have a hard time understanding why xsm_vm_event_op
and xsm_vm_event_control differ when it comes to device model domains.
The event_op corresponds to the memops for access, paging and sharing,
while event_control is the domctl that enables/disables the rings. So
yes, I think splitting the names for these separate things would make
sense to clarify what they represent, but I'm not sure whether that
restriction on device model domains was intentional.


If the device model stubdom does not use (or plan to use) the memory_op
hypercall, then there is no reason to allow it to make this hypercall,
and the XSM check should be changed from XSM_DM_PRIV to XSM_PRIV.  From
a quick look, this seems to be true, but I would defer to someone who
has actually used or developed this subsystem.

As far as the naming, several other hypercalls such as tmem have a
distinction between "use" and "control" that is reflected in the XSM
policy.  From a quick look at how the hypercalls work, the domctl seems
to be a toolstack-only feature that is set when building a domain, while
the mem_event_op hypercall is used by a helper process.

I think it might be possible to move the helper process to a stub domain
when creating a very disaggregated system.  In that case, a distinct
permission for its hypercalls would be useful to let the stub domain
perform sharing operations without being able to turn sharing on and
off.  Otherwise the current single permission (moved to domain2) should
be sufficient.

Daniel De Graaf
National Security Agency
