[RFC 0/4] Adding Virtual Memory Fuses to Xen
Hi Xen Developers,

My team at Riverside Research is currently spending IRAD funding to prototype next-generation secure hypervisor design ideas on Xen. In particular, we are prototyping the idea of Virtual Memory Fuses for Software Enclaves, as described in this paper: https://www.nspw.org/papers/2020/nspw2020-brookes.pdf. Note that the paper describes the idea in terms of OS/process, while we have implemented it for hypervisor/VM.

Our goal is to emulate something akin to Intel SGX or AMD SEV, but using only existing virtual memory features common in all processors. The basic idea is to not map guest memory into the hypervisor, so that a compromised hypervisor cannot compromise (e.g. read/write) the guest. This idea has been proposed before; however, Virtual Memory Fuses go one step further: they delete the hypervisor's mappings to its own page tables, essentially locking the virtual memory configuration for the lifetime of the system. This creates what we call "Software Enclaves", ensuring that an adversary with arbitrary code execution in the hypervisor STILL cannot read/write guest memory.

With this technique, we protect the integrity and confidentiality of guest memory. However, a compromised hypervisor can still read/write register state during traps, or refuse to schedule a guest, denying service. We also recognize that because this technique precludes modifying Xen's page tables after startup, it may not be compatible with all of Xen's potential use cases. On the other hand, there are some use cases (in particular statically defined embedded systems) where our technique could be adopted with minimal friction. With this in mind, our goal is to work with the Xen community to upstream this work as an optional feature.

At this point, we have a prototype implementation of VMF on Xen (the contents of this RFC patch series) that supports dom0less guests on arm64. By sharing our prototype, we hope to socialize our idea, gauge interest, and hopefully gain useful feedback as we work toward upstreaming.

** IMPLEMENTATION **

In our current setup we have a static configuration with dom0 and one or two domUs. Soon after boot, dom0 issues a hypercall through the xenctrl interface to blow the fuse for the domU. In the future, we could also add code to support blowing the fuse automatically on startup, before any domains are un-paused.

Our Xen/arm64 prototype creates Software Enclaves in two steps, represented by these two functions defined in xen/vmf.h:

    void vmf_unmap_guest(struct domain *d);
    void vmf_lock_xen_pgtables(void);

In the first step, Xen removes its mappings to the guest(s). On arm64, Xen keeps a reference to all of guest memory in the directmap. Right now, we simply walk all of the guest second stage tables and remove the corresponding entries from the directmap, although there is probably a more elegant method for this.

Second, Xen removes the mappings to its own page tables. On arm64, this also involves manipulating the directmap. One challenge here is that as we start to unmap our tables from the directmap, we can't use the directmap to walk them. Our solution here is also a bit less elegant: we temporarily insert a recursive mapping and use that to remove the page table entries.
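To make the first step a bit more concrete, below is a rough, simplified sketch (not the code from this patch series) of how a vmf_unmap_guest() implementation could drop the directmap aliases of a guest's pages. For simplicity it iterates the domain's page list rather than walking the second stage tables as our prototype does, and it reuses the existing destroy_xen_mappings() interface for tearing down hypervisor mappings:

    /*
     * Simplified sketch only: iterate the domain's pages and remove the
     * corresponding directmap aliases so Xen can no longer read or write
     * guest memory through them.  The actual prototype walks the guest's
     * second stage tables instead of d->page_list.
     */
    #include <xen/mm.h>
    #include <xen/sched.h>
    #include <xen/spinlock.h>

    void vmf_unmap_guest(struct domain *d)
    {
        struct page_info *page;

        spin_lock(&d->page_alloc_lock);

        page_list_for_each ( page, &d->page_list )
        {
            unsigned long va =
                (unsigned long)maddr_to_virt(page_to_maddr(page));

            /* Remove the directmap alias for this page. */
            destroy_xen_mappings(va, va + PAGE_SIZE);
        }

        spin_unlock(&d->page_alloc_lock);
    }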
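For the dom0 side, the sketch below shows how a small tool might blow the fuse for a domU through the xenctrl interface. xc_interface_open() and xc_interface_close() are the standard libxenctrl calls; xc_vmf_unmap_guest() is just a hypothetical name for a wrapper around the new hypercall and is not part of the existing libxenctrl API:

    #include <stdio.h>
    #include <stdlib.h>
    #include <xenctrl.h>

    int main(int argc, char **argv)
    {
        xc_interface *xch;
        uint32_t domid;
        int rc;

        if ( argc != 2 )
        {
            fprintf(stderr, "usage: %s <domid>\n", argv[0]);
            return 1;
        }
        domid = atoi(argv[1]);

        xch = xc_interface_open(NULL, NULL, 0);
        if ( !xch )
        {
            fprintf(stderr, "failed to open xenctrl interface\n");
            return 1;
        }

        /* Hypothetical wrapper for the new VMF hypercall: unmap the guest
         * from Xen and blow the fuse so the mappings cannot be restored. */
        rc = xc_vmf_unmap_guest(xch, domid);
        if ( rc )
            fprintf(stderr, "VMF hypercall failed: %d\n", rc);

        xc_interface_close(xch);
        return rc ? 1 : 0;
    }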
** LIMITATIONS and other closing thoughts **

The current Xen code has obviously been implemented under the assumption that new pages can be mapped and that guest virtual addresses can be read, so this technique will break some Xen features. However, in the general case (in particular for static workloads where the number of guests does not change after boot), we've seen that Xen rarely needs to access guest memory or adjust its page tables.

We see a lot of potential synergy with other Xen initiatives, like Hyperlaunch for static domain allocation, or SEV support driving new hypercall interfaces that don't require reading guest memory. These features would allow VMF (Virtual Memory Fuses) to work with more configurations and architectures than our current prototype, which only supports static configurations on arm64.

We have not yet studied how the prototype VMF implementation impacts performance. On the surface, there should be no significant changes. However, cache effects from splitting the directmap superpages could introduce a performance cost, and there is some latency introduced by walking all the tables to retroactively remove guest memory. This could be optimized by reworking the Xen code to remove the directmap entirely. We've toyed with the idea, but haven't attempted it yet.

Finally, our initial testing suggests that Xen never reads guest memory (in a static, non-dom0-enhanced configuration), but we have not explored this thoroughly.

We know at least these things work:
    Dom0less virtual serial terminal
    Domain scheduling

We are aware that these things currently depend on accessible guest memory:
    Some hypercalls take guest pointers as arguments
    Virtualized MMIO on arm needs to decode certain load/store instructions

It's likely that other Xen features require guest memory access. Also, there is currently a lot of debug code that isn't needed for normal operation, but assumes the ability to read guest memory or walk page tables in an exceptional case. The Xen codebase will need to be audited for these cases, and proper guards inserted so this code doesn't page fault.

Thanks for allowing us to share our work with you. We are really excited about it, and we look forward to hearing your feedback. We figure those working with Xen on a day-to-day basis will likely uncover details we have overlooked.

Jackson