Re: [Xen-devel] pre Sandy bridge IOMMU support (gm45)
On Tue, Jan 26, 2016 at 04:37:05AM -0700, Jan Beulich wrote:
> (re-adding xen-devel)
>
> >>> On 26.01.16 at 12:28, <thierry.laurion@xxxxxxxxx> wrote:
> > iommu=0 lets the whole Qubes system work, without enforcing hardware
> > compartmentalisation (the IOMMU is enforced in software mode).
> >
> > When iommu=no-igfx is enforced, the shell console boots up flawlessly.
> > All domU machines get booted up. A system hang happens the moment a
> > domU machine does graphic rendering,
>
> And this is (other than I originally implied) without passing through
> the IGD to the DomU? If so, I can't see the difference between a
> guest rendering to its display (and vncviewer or whatever frontend
> you use converting this to rendering on the host) and rendering
> which originates in the host.

Not sure if relevant, but window content is mapped from the PV domU
directly into the X server's (dom0) address space, using
xc_map_foreign_pages. This is done by hooking the XShmAttach function.
I'm not sure what the graphics driver does with it next. Theoretically
the driver could direct the IGD to do DMA directly from that memory, but
I guess it does not.

> Jan
>
> > which often results in the tray icon being fuzzy just before the
> > system gets unresponsive (netvm showing it got connected, through the
> > nm-applet rendering), or a notification starting to show up while the
> > system hangs, before it disappears with some minor/major visual
> > glitch being visible (usb-vm showing device attribution to another
> > vm).
> >
> > Again, if iommu=0 is passed to Xen, there is no system hang, but also
> > no added isolation security from USB devices being in one domU and
> > network devices being in another, while applications sit in separate
> > ones. This is why Qubes strongly suggests, but doesn't require, the
> > IOMMU: stronger isolation.
> >
> > The IGD has a bad history of IOMMU support.
> > A quick list:
> >
> > - http://lists.freedesktop.org/archives/dri-devel/2013-January/033662.html
> > - https://lists.ubuntu.com/archives/kernel-team/2013-February/024796.html
> >
> > Isolation of the netvm and usb-vm is a required use case in Qubes.
> > IGD passthrough would be nice to have, but isn't required. I don't
> > really see why someone would want to pass through the IGD to a
> > Windows domU; gm45-based laptops are definitely not gaming laptops.
> > Since i915 and gm45 have a bad IOMMU history, just being able to
> > completely disable the IOMMU for the IGD would suffice.
> >
> > Thierry
> >
> > On Tue, Jan 26, 2016 at 05:52, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> >
> >> >>> On 25.01.16 at 22:49, <thierry.laurion@xxxxxxxxx> wrote:
> >> > The case is 1) disabling the IOMMU for the IGD, unilaterally,
> >> > since i915 + gm45 don't play well together. The IOMMU is still
> >> > desired to isolate USB and network devices, so we don't want to
> >> > disable it completely. The side effect of this would be to have
> >> > the IGD only for dom0, which would also completely make sense in
> >> > this use case.
> >> >
> >> > The point is that iommu=no-igfx doesn't fix the issue, since
> >> > remapping seems to still happen for the IGD. Does that make sense?
> >>
> >> It certainly may make sense, just that in what you have written so
> >> far I don't think I've been able to spot any evidence thereof. Since,
> >> as you say, nothing interesting gets logged by Xen, you must be
> >> drawing this conclusion from something (or else you wouldn't say
> >> "doesn't fix the issue").
> >>
> >> Jan

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
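[Editorial note: for context on the mapping mechanism Marek describes, the dom0 side of such a foreign mapping looks roughly like the sketch below. This is an illustrative fragment against the libxc API, not Qubes' actual gui-daemon code; the domid and frame numbers are placeholders, and it can only run as a privileged process on a Xen host.]

```c
#include <stdio.h>
#include <sys/mman.h>   /* PROT_READ, munmap */
#include <xenctrl.h>    /* libxc; only available on a Xen host */

int main(void)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    if (!xch)
        return 1;   /* not on Xen, or not privileged */

    /* Placeholder guest frame numbers for two pages of window content;
     * the real GUI daemon learns these from the domU over xenstore. */
    xen_pfn_t pfns[2] = { 0x1234, 0x1235 };

    /* Map the guest's pages read-only into our own address space.
     * domid 1 is hypothetical. */
    void *win = xc_map_foreign_pages(xch, 1, PROT_READ, pfns, 2);
    if (win) {
        /* dom0 software (the X server) reads pixel data from `win`;
         * whether the IGD ever DMAs from here is the open question
         * in the thread above. */
        printf("mapped at %p\n", win);
        munmap(win, 2 * XC_PAGE_SIZE);
    }

    xc_interface_close(xch);
    return 0;
}
```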
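[Editorial note: both settings compared in the thread, iommu=0 and iommu=no-igfx, are Xen hypervisor command-line options, typically set in dom0's GRUB configuration. The file path and variable name below follow the common Debian/Fedora convention and are illustrative; they may differ per distribution, and grub.cfg must be regenerated after editing.]

```
# /etc/default/grub (illustrative)

# Keep the IOMMU for USB/network device isolation, but leave the
# integrated graphics (IGD) out of DMA remapping:
GRUB_CMDLINE_XEN_DEFAULT="iommu=no-igfx"

# Workaround discussed above: no hangs, but no hardware isolation at all:
# GRUB_CMDLINE_XEN_DEFAULT="iommu=0"
```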