
Re: [Xen-devel] [RFC] Hypervisor, x86 emulation deprivileged



> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of
> Andrew Cooper
> Sent: 05 July 2016 18:26
> To: Anthony Perard; xen-devel@xxxxxxxxxxxxx; Jan Beulich
> Subject: Re: [Xen-devel] [RFC] Hypervisor, x86 emulation deprivileged
> 
> On 05/07/16 12:22, Anthony PERARD wrote:
> > Hi,
> >
> > I've taken over the work from Ben to have a deprivileged mode in the
> > hypervisor, but I'm unsure about which direction to take.
> 
> You should begin with an evaluation of the available options, identifying
> which issues are mitigated, which are not, and which new risks are
> introduced.
> 
> 

Personally, I can't see why we would want to invent a new way of executing code 
in the hypervisor. What's wrong with modelling each emulator as a PV unikernel, 
running with all the usual domain and vcpu constructs (a rough sketch of what I 
mean follows the list below)? Doing anything else seems like:

a) a lot of work for little gain
b) re-inventing the wheel
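
To make that concrete, here is a rough, purely illustrative sketch (the names
and layout are mine, not an existing Xen interface) of the sort of
request/response pair the hypervisor could marshal onto a shared ring for an
emulator domain; this is the same producer/consumer pattern we already use
between Xen and device-model domains:

#include <stdint.h>

#define EMUL_INSN_MAX 15            /* longest legal x86 instruction */

/* Hypothetical request: minimal vcpu state copied out for the emulator. */
struct emul_request {
    uint32_t domid;                 /* guest being emulated */
    uint32_t vcpu_id;               /* vcpu that trapped */
    uint8_t  insn[EMUL_INSN_MAX];   /* copied instruction bytes */
    uint8_t  insn_len;
    uint64_t gpr[16];               /* general-purpose register snapshot */
    uint64_t rip, rflags;
};

/* Hypothetical response: result to apply before resuming the guest vcpu. */
struct emul_response {
    int32_t  rc;                    /* 0 = success, else fault to inject */
    uint64_t gpr[16];               /* updated register state */
    uint64_t rip, rflags;
};

Everything above is ordinary inter-domain plumbing; the emulator domain itself
would just be another vcpu scheduled like any other.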

  Paul

> E.g.
> 
> 1) De-privileging the x86 instruction emulator into hypervisor ring3
> Issues mitigated:
> * Out of bounds pointer accesses.
> Issues not mitigated:
> * On-stack state corruption.
> Risks:
> * Introduces what is basically a 3rd type of vcpu, including all the
> subtlety that comes with user processes.
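
(To illustrate that distinction with a contrived example, not actual emulator
code: the first function below shows the class of bug a ring 3 split would
contain, since the wild access faults in the deprivileged context rather than
reading hypervisor memory; the second corrupts its own stack frame, which the
ring split alone does nothing about.)

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Guest-controlled offset with a missing bounds check: run in ring 0 this
 * reads arbitrary hypervisor memory; run in ring 3 it merely faults in the
 * deprivileged emulator context. */
static void read_operand(uint8_t *dst, const uint8_t *base,
                         uint64_t guest_offset, size_t len)
{
    memcpy(dst, base + guest_offset, len);
}

/* A stack buffer overflow corrupts state within the same context, so
 * deprivileging does not mitigate it by itself. */
static void copy_prefixes(const uint8_t *insn, size_t insn_len)
{
    uint8_t prefixes[4];

    for ( size_t i = 0; i < insn_len; i++ )   /* insn_len may exceed 4 */
        prefixes[i] = insn[i];
}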
> 
> 
> Performance is strictly a secondary consideration; make something which
> works first, then make it fast, never the opposite way around.
> 
> Another exercise which might be useful is to look at the recent XSAs and
> identify which of them could have been mitigated by one of the
> suggestions.
> 
> ~Andrew
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

