Re: [Xen-devel] [PATCH 28/28] libxl: xsrestrict QEMU
On Tue, 2015-12-22 at 18:45 +0000, Ian Jackson wrote:
> If QEMU supports xsrestrict, pass xsrestrict=on to it (by default).
>
> XXX We need to do this only if xenstored supports it, and AFAICT there
> is not a particularly easy way to test this.  Should we open a new
> test xenstore connection to query this information?  We could do this
> once per libxl ctx.

Is this because the support is in oxenstored but not cxenstored?
Otherwise I would argue that the toolstack should expect the xenstored
to be at least as new as it is.

> Let's start checking libxl.  The function that reads the physmap is
> libxl__toolstack_save.  It calls:
>
> * libxl__xs_directory on /physmap
>   This is safe.

Potentially very many entries, and hence a big return array? Are the
guest quotas sufficient for us not to worry about this?

> Either way libxl is fine.
> [...]
> Either way QEMU is fine.

Perhaps we should add a code comment to each of these places noting
that the values are guest controlled, since that is rather more
unexpected in this case than for e.g. frontend dirs.

> diff --git a/docs/misc/xenstore-paths.markdown b/docs/misc/xenstore-paths.markdown
> index 1dc54ac..27981de 100644
> --- a/docs/misc/xenstore-paths.markdown
> +++ b/docs/misc/xenstore-paths.markdown
> @@ -339,6 +339,13 @@ A PV console backend. Described in [console.txt](console.txt)
>  Information relating to device models running in the domain. $DOMID is
>  the target domain of the device model.
>
> +#### ~/device-model/$DOMID/physmap [w]
> +
> +Information about an HVM domain's physical memory map.  This is
> +written as well as read by device models which may run at the same
> +privilege level of the guest domain.  When the device model ruus with

"runs"

> +system privilege, this path is INTERNAL.
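As a strawman for the once-per-ctx probe discussed above, something
along these lines might do.  Note that the feature node
/tool/xenstored/features/xsrestrict is invented purely for
illustration; neither this patch nor xenstored defines such a path:

    #include <stdbool.h>
    #include <stdlib.h>
    #include <xenstore.h>

    /* Probe once per libxl ctx whether xenstored advertises xsrestrict
     * support, using a throwaway xenstore connection.  The feature
     * path below is a made-up placeholder, not an existing interface. */
    static bool xenstored_supports_xsrestrict(void)
    {
        struct xs_handle *xsh = xs_open(0);
        unsigned int len;
        void *val;
        bool supported;

        if (!xsh)
            return false;

        val = xs_read(xsh, XBT_NULL,
                      "/tool/xenstored/features/xsrestrict", &len);
        supported = (val != NULL);

        free(val);
        xs_close(xsh);
        return supported;
    }

The result could be cached in the ctx so we only pay for the extra
connection once.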
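On the quota point above: if we wanted belt-and-braces in libxl rather
than relying on xenstore quotas alone, a defensive cap on the listing
would look roughly like this (PHYSMAP_MAX_ENTRIES is an invented
bound; libxl defines no such limit today):

    #include <stdlib.h>
    #include <xenstore.h>

    #define PHYSMAP_MAX_ENTRIES 1024   /* invented bound, illustrative */

    /* List the (guest-influenced) physmap directory, refusing
     * implausibly large results instead of trusting quotas alone. */
    static char **physmap_entries(struct xs_handle *xsh, const char *path,
                                  unsigned int *num)
    {
        char **ents = xs_directory(xsh, XBT_NULL, path, num);

        if (ents && *num > PHYSMAP_MAX_ENTRIES) {
            free(ents);   /* xs_directory returns one malloc'd block */
            *num = 0;
            return NULL;
        }
        return ents;
    }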
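And by way of example, the sort of annotation I have in mind at each
of the read sites mentioned above (wording and placement invented
here):

    /*
     * SECURITY: the values under ~/device-model/$DOMID/physmap are
     * written by the device model, which may run at the same privilege
     * level as the guest.  Treat everything read from here as guest
     * controlled, just as for frontend directories.
     */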