
Re: [RFC 0/4] Adding Virtual Memory Fuses to Xen



On Thu, 22 Dec 2022, Julien Grall wrote:
> > What other hypervisors might or might not do should not be a factor in
> > this discussion and it would be best to leave it aside.
> 
> To be honest, Demi has a point. At the moment, VMF is a very niche use case
> (see more below). You would end up using less than 10% of the normal Xen
> on Arm code, so a lot of people will likely wonder why they would use Xen
> in this case.

[...]

> > From an AMD/Xilinx point of view, most of our customers using Xen in
> > production today don't use any hypercalls in one or more of their VMs.
> This suggests a mix of guests is running (some using hypercalls and others
> not). That would not be possible if you were using VMF.

It is true that the current limitations are very restrictive.

In embedded, we have a few pure static partitioning deployments where no
hypercalls are required (Linux uses hypercalls today, but it could do
without them), so maybe VMF could be enabled. Admittedly, in those cases
the main focus today is safety and fault tolerance, rather than
confidential computing.


> > Xen is great for these use cases and it is rather common in embedded.
> > It is certainly a different configuration from what most have come to
> > expect from Xen on the server/desktop x86 side. There is no question
> > that guests without hypercalls are important for Xen on ARM.
> >
> > As a Xen community we have a long history and strong interest in making
> > Xen more secure and also, more recently, safer (in the ISO 26262
> > safety-certification sense). The VMF work is very well aligned with both
> > of these efforts, and any additional burden on attackers is certainly
> > good for Xen.
> 
> I agree that we have a strong focus on making Xen more secure. However, we
> also need to look at the use cases for it. As it stands, there will be no:
>   - IOREQ use (so don't even think about emulating a TPM)
>   - GICv3 ITS
>   - stage-1 SMMUv3
>   - decoding of instructions when there is no syndrome
>   - hypercalls (including event channels)
>   - dom0
> 
> That's a lot of Xen features that can't be used. Effectively you will make
> Xen more "secure" for very few users.

Among these, the main problems affecting AMD/Xilinx users today would be:
- decoding of instructions
- hypercalls, especially event channels

The lack of instruction decoding would affect all our deployments. As for
hypercalls, even in static partitioning deployments, event channels are
sometimes used for VM-to-VM notifications.
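
For concreteness, even the simplest VM-to-VM notification boils down to a
hypercall, which VMF as proposed would forbid. A minimal guest-side sketch
using Xen's public event channel interface (EVTCHNOP_send and struct
evtchn_send are the real interface; the notify_peer() helper is
hypothetical, and the headers assume a Linux guest):

    /* Assumes a Linux guest; headers and wrappers differ in other OSes. */
    #include <asm/xen/hypercall.h>
    #include <xen/interface/event_channel.h>

    /* Hypothetical helper: kick a peer VM over an already-bound channel. */
    static void notify_peer(evtchn_port_t port)
    {
        struct evtchn_send send = { .port = port };

        /* Even this one-line notification requires a hypercall. */
        HYPERVISOR_event_channel_op(EVTCHNOP_send, &send);
    }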


> > Now the question is what changes are necessary and how to make them to
> > the codebase. And if it turns out that some of the changes are not
> > applicable or too complex to accept, the decision will be made purely
> > from a code maintenance point of view and will have nothing to do with
> > VMs making no hypercalls being unimportant (i.e. if we don't accept one
> > or more patches, it is not going to have anything to do with the use case
> > being unimportant or with what other hypervisors might or might not do).
> I disagree; I think this is also about use cases. On paper VMF looks very
> good, but so far it still has a big flaw (the TTBR can be changed) and it
> would greatly restrict what you can do.

We would need to be very clear in the commit messages and documentation
that with the current version of VMF we do *not* achieve confidential
computing and we do *not* offer protections comparable to AMD SEV. It is
still possible for Xen to access guest data; it is just a bit harder.

From an implementation perspective, if we can find a way to implement it
that would be easy to maintain, then it might still be worth it. It
would probably take only a small amount of changes on top of the "Remove
the directmap" series to make it so "map_domain_page" doesn't work
anymore after boot.

That might be worth exploring if you and Jackson agree?
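
To make that concrete, here is a minimal sketch of the kind of fuse I have
in mind, assuming the directmap is already gone. Everything except
map_domain_page() itself is hypothetical: the vmf_sealed flag, vmf_seal(),
and the renamed inner mapping function are made up for illustration.

    /* A sketch only -- all names except map_domain_page() are made up. */
    #include <xen/lib.h>
    #include <xen/mm.h>
    #include <xen/domain_page.h>

    /* Flipped once at the end of boot, before the first guest runs. */
    static bool __read_mostly vmf_sealed;

    void vmf_seal(void)
    {
        vmf_sealed = true;
    }

    void *map_domain_page(mfn_t mfn)
    {
        /*
         * Once the fuse is blown, Xen can no longer map guest pages
         * into its own address space; treat any attempt as fatal.
         */
        BUG_ON(vmf_sealed);

        return vmf_do_map(mfn);  /* the existing mapping path, renamed;
                                    hypothetical */
    }

Whether BUG_ON() is the right failure mode, or something quieter, would be
part of the same maintenance discussion.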


One thing that would make VMF much more widely applicable is your idea of
hypercall bounce buffers. VMF might work with hypercalls if the guest
always used the same buffer to pass hypercall parameters to Xen. That
one buffer could remain mapped in Xen for the lifetime of the VM, and the
VM would know to use it only to pass parameters to Xen.
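
A rough guest-side sketch of that scheme, with everything hedged: the
registration hypercall, virt_to_gfn(), and HYPERVISOR_generic_op() below
are all hypothetical names, since no such interface exists today.

    /* Hypothetical guest-side sketch; no such Xen interface exists today. */
    #include <string.h>

    #define PAGE_SIZE 4096

    /* The one page Xen keeps mapped for the lifetime of the VM. */
    static unsigned char xen_bounce_buf[PAGE_SIZE]
        __attribute__((__aligned__(PAGE_SIZE)));

    void guest_early_init(void)
    {
        /* Tell Xen which frame to keep mapped (hypothetical hypercall). */
        HYPERVISOR_vmf_register_buffer(virt_to_gfn(xen_bounce_buf));
    }

    long issue_hypercall(unsigned int op, const void *args, size_t size)
    {
        /* Stage the arguments in the only buffer Xen can still read... */
        memcpy(xen_bounce_buf, args, size);

        /* ...and pass that fixed buffer instead of an arbitrary pointer. */
        return HYPERVISOR_generic_op(op, xen_bounce_buf);  /* hypothetical */
    }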



 

