Re: [Xen-devel] [PATCH v2 1/8] public / x86: Introduce __HYPERCALL_dm_op...
On Tue, Dec 06, 2016 at 01:46:12PM +0000, Paul Durrant wrote:
> ...as a set of hypercalls to be used by a device model.

[...]

> +Introduction
> +------------
> +
> +A previous proposal for a 'DMOP' was put forward by Ian Jackson on the 1st
> +of August. This proposal seem very promising, however a problem was
> +identified with it, that this proposal addresses.
> +
> +The aim of DMOP, as before, is to prevent a compromised device model from
> +compromising domains other then the one it is associated with. (And is
> +therefore likely already compromised).
> +
> +The previous proposal adds a DMOP hypercall, for use by the device models,
> +which places the domain ID in a fixed place, within the calling args,
> +such that the priv driver can always find it, and not need to know any
> +further details about the particular DMOP in order to validate it against
> +previously set (vie ioctl) domain.
> +
> +The problem occurs when you have a DMOP with references to other user memory
> +objects, such as with Track dirty VRAM (As used with the VGA buffer).
> +Is this case, the address of this other user object needs to be vetted,
> +to ensure it is not within a restricted address ranges, such as kernel
> +memory. The real problem comes down to how you would vet this address -
> +the idea place to do this is within the privcmd driver, since it would have
> +knowledge of the address space involved. However, with a principal goal
> +of DMOP to allow privcmd to be free from any knowledge of DMOP's sub ops,
> +it would have no way to identify any user buffer addresses that need
> +checking. The alternative of allowing the hypervisor to vet the address
> +is also problematic, since it has no knowledge of the guest memory layout.
> +

Could you perhaps rewrite this section? The reference to "Ian Jackson
on the ..." and some other bits aren't really useful in the context of
a design doc. The comparison between the old and new proposal isn't
needed IMHO.
There has never been an old implementation / design doc in xen.git.

> +Validation by privcmd driver
> +------------------------------
> +
> +If the privcmd driver has been restricted to specific domain (using a
> + new ioctl), when it received an op, it will:
> +
> +1. Check hypercall is DMOP.
> +
> +2. Check domid == restricted domid.
> +
> +3. For each @nr_buffers in @buffers: Check @h and @len give a buffer
> +   wholey in the user space part of the virtual address space. (e.g.,
> +   on Linux use access_ok()).
> +
> +
> +Xen Implementation
> +------------------
> +
> +Since a DMOP sub of may need to copy or return a buffer from the guest,

"sub of" -> "sub op" ?

> +as well as the DMOP itself to git the initial buffer, functions for doing
> +this would be written as below. Note that care is taken to prevent
> +damage from buffer under or over run situations. If the DMOP is called
> +with insufficient buffers, zeros will be read, while extra is ignored.
> +

[...]

> # make_device_model(priv, dm_dom, hvm_dom)

> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index 2c83544..cc37752 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h

Is DMOP considered stable in this patch? I guess not, given the header
is enclosed by __XEN__ and __XEN_TOOLS__. Let's keep it in libxc for
now. Eventually it will need to be moved to tools/libs.

> @@ -41,6 +41,7 @@
>  #include <xen/sched.h>
>  #include <xen/memory.h>
>  #include <xen/grant_table.h>
> +#include <xen/hvm/dm_op.h>
>  #include <xen/hvm/params.h>
>  #include <xen/xsm/flask_op.h>
>  #include <xen/tmem.h>

> diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
> index d57c39a..8e49635 100644
> --- a/tools/libxc/xc_private.c
> +++ b/tools/libxc/xc_private.c
> @@ -776,6 +776,76 @@ int xc_ffs64(uint64_t x)
>      return l ? xc_ffs32(l) : h ? xc_ffs32(h) + 32 : 0;
>  }
>
> +int do_dm_op(xc_interface *xch, domid_t domid, unsigned int nr_bufs, ...)
> +{
> +    int ret = -1;
> +    struct  {
> +        void *u;
> +        void *h;
> +    } *bounce;
> +    DECLARE_HYPERCALL_BUFFER(xen_dm_op_buf_t, bufs);
> +    va_list args;
> +    unsigned int idx;
> +
> +    bounce = calloc(nr_bufs, sizeof(*bounce));
> +    if ( bounce == NULL )
> +        goto fail1;
> +
> +    bufs = xc_hypercall_buffer_alloc(xch, bufs, sizeof(*bufs) * nr_bufs);
> +    if ( bufs == NULL )
> +        goto fail2;
> +
> +    va_start(args, nr_bufs);
> +    for (idx = 0; idx < nr_bufs; idx++)

for ( idx .. idx++ )

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel