
Re: [Xen-devel] Questions about the usage of the vTPM implemented in Xen 4.3



On 02/12/2014 04:38 AM, Jordi Cucurull Juan wrote:
Hello Daniel,

On 02/11/2014 04:26 PM, Daniel De Graaf wrote:
On 02/11/2014 05:01 AM, Jordi Cucurull Juan wrote:
Hello Daniel,

Thanks for your thorough answer. I have a few comments below.

On 02/10/2014 08:40 PM, Daniel De Graaf wrote:
On 02/05/2014 11:52 AM, Jordi Cucurull Juan wrote:
Dear all,

I have recently configured a Xen 4.3 server with the vTPM enabled and a
guest virtual machine that takes advantage of it. After playing a bit
with it, I have a few questions:

1. According to the documentation, to shut down the vTPM stubdom it is
only necessary to shut down the guest VM normally; the vTPM stubdom is
then supposed to shut down automatically. Nevertheless, if I shut down
the guest, the vTPM stubdom stays active and, moreover, I can start the
guest again and the vTPM's values are the last ones from the previous
instance of the guest. Is this normal?

The documentation is in error here; while this was originally how the
vTPM domain behaved, this automatic shutdown was not reliable: it was
not done if the peer domain did not use the vTPM, and it was incorrectly
triggered by pv-grub's use of the vTPM to record guest kernel
measurements (which was the immediate reason for its removal). The
solution now is to either send a shutdown request or simply destroy the
vTPM upon guest shutdown.
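 
For example, a guest-teardown wrapper along these lines would do that (a
minimal sketch only; the "$1" / "$1-vtpm" naming convention is an
assumption, not something xl enforces):

#!/bin/sh -e
# Ask the guest to shut down cleanly and wait for it to finish.
xl shutdown -w "$1"
# Then remove its paired vTPM so that a fresh one is built next time.
xl destroy "$1-vtpm" || true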

An alternative that may require less work on your part is to destroy
the vTPM stub domain during a guest's construction, something like:

#!/bin/sh -e
# Remove any stale vTPM left over from a previous run of this guest.
xl destroy "$1-vtpm" || true
# Build a fresh vTPM, then the guest that is paired with it.
xl create "$1-vtpm.cfg"
xl create "$1-domu.cfg"
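 
Guests would then always be started through this wrapper, e.g. (the
script name and the guest name domu1 are assumptions for illustration):

./start-guest.sh domu1   # expects domu1-vtpm.cfg and domu1-domu.cfg

so that every guest instance comes up against a freshly built vTPM.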

Allowing a vTPM to remain active across a guest restart will cause the
PCR values extended by pv-grub to be incorrect, as you observed in your
second email. In order for the vTPM's PCRs to be useful for quotes or
releasing sealed secrets, you need to ensure that a new vTPM is started
if and only if it is paired with a corresponding guest.

I see a potential threat due to this behaviour (please correct me if I
am wrong).

Assume an administrator of Dom0 becomes malicious. Since the hypervisor
does not enforce the shutdown of the vTPM domain, the malicious
administrator could try the following: 1) make a copy of the peer
domain, 2) manipulate the copy of the peer domain and disable its
measurements, 3) boot the original peer domain, 4) switch it off or
pause it, 5) boot the manipulated copy of the peer domain.

The PCR values then reported by the manipulated copy of the peer domain
are the ones measured by the original peer domain during the first boot,
but the manipulated copy is the one actually running. Hence, this could
not be detected by quoting either the vTPM or the pTPM.


A malicious dom0 has a much simpler attack vector: start the domain with
a custom version of pv-grub that extends arbitrary measurements instead
of the real kernel's measurements. Then, a user kernel with disabled or
similarly false measuring capabilities can be booted. Alternatively,
if XSM policies do not restrict it, a debugger could be attached to the
guest so that it can be manipulated online.

This is the reason why I wanted to measure Dom0: to detect a possible
manipulation, e.g. a custom version of pv-grub. Nevertheless, the
administrator could still try to inject a manipulated copy of it into
the system after booting it. Hence, I agree with the solutions you
propose below.


Maybe one possible solution could be to enforce an XSM FLASK policy that
prevents any user in Dom0 from destroying, shutting down or pausing a
domain, and then to measure the policy into a PCR of the physical TPM
when Dom0 starts. Nevertheless, on the one hand I do not know if this is
feasible and, on the other hand, it prevents the system from destroying
the vTPM domain when the peer domain shuts down.

The solution to this problem is to disaggregate dom0 and relocate the
domain building component to a stub domain that is completely measured
in the pTPM (perhaps by TBOOT). The domain builder could use a static
library of domains to build (hardware domain and TPM Manager built only
once; vTPM and pv-grub domain pairs built on request). An XSM policy
could then restrict vTPM communication so that only correctly built
guests are allowed to talk to their paired vTPM. In this case, dom0
would have permission to shut down either VM, but could not start a
replacement.
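 
By way of illustration only (the policy file path and the label names
below are assumptions, not part of the shipped example policy), such a
pairing restriction would build on the FLASK labelling that xl already
supports:

# Load a FLASK policy that defines one type per guest/vTPM pair.
xl loadpolicy /boot/xenpolicy-custom
# domu1-vtpm.cfg would then carry: seclabel="system_u:system_r:domu1_vtpm_t"
# domu1-domu.cfg would then carry: seclabel="system_u:system_r:domu1_t"
# The policy grants the grant/event permissions used by the vTPM
# frontend/backend only between domu1_t and domu1_vtpm_t, so no other
# guest can attach to that vTPM.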


I understand this cannot be done with the current implementation of Xen.
Are there any plans to do this in the future?

Yes, although I am not sure how easy it will be to upstream the changes.
The hypervisor changes to make the hardware domain distinct from dom0
are mostly present in 4.4 - it just lacks the hooks to postpone IOMMU
setup for the hardware domain instead of dom0. I plan to post these for
review once 4.4 is branched.

The domain builder needed to run this setup also exists but the version
which handles build requests depends on an out-of-tree patch to the
hypervisor implementing inter-domain message sending (IVC). Since V4V is
more feature-complete than IVC, I think changing our domain builder to
use V4V instead of IVC will be necessary for it to be incorporated into
Xen. The bootstrapping version of the domain builder (which would be run
with domain ID 0) does not depend on IVC, but only builds a single list
of domains from its ramdisk.

2. In the documentation it is recommended to avoid accessing the
physical TPM from Dom0 at the same time as the vTPM Manager stubdom.
Nevertheless, I currently have IMA and TrouSerS enabled in Dom0 without
any apparent issue. Why is directly accessing the physical TPM from Dom0
not recommended?

While most of the time it is not a problem to have two entities talking
to the physical TPM, it is possible for the trousers daemon in dom0 to
interfere with key handles used by the TPM Manager. There are also
certain operations of the TPM that may not handle concurrency, although
I do not believe that trousers uses them - SHA1Start, the DAA commands,
and certain audit logs come to mind.

The other reason why it is recommended to avoid pTPM access from dom0 is
that the ability to send unseal/unbind requests to the physical TPM
makes it possible for applications running in dom0 to decrypt the TPM
Manager's data (and thereby access vTPM private keys).

At present, sharing the physical TPM between dom0 and the TPM Manager is
the only way to get full integrity checks.

OK, I see. Hence leaving TPM support enabled in Dom0 opens a security
problem for the vTPM. But if we do not enable the support, the integrity
of Dom0 cannot be proven using the TPM (e.g. by remote attestation).

Right. Since dom0 must currently be trusted (as discussed above), this
is the best way to handle the dom0 attestation problem.


3. If it is not recommended to directly access the physical TPM from
Dom0, what is the advisable way to check the integrity of this domain?
With solutions such as TBOOT and Intel TXT?

While the TPM Manager in Xen 4.3/4.4 does not yet have this
functionality, an update which I will be submitting for inclusion in Xen
4.5 has the ability to get physical TPM quotes using a virtual TPM.
Combined with an early domain builder, the eventual goal is to have dom0
use a vTPM for its integrity/reporting/sealing operations, and use the
physical TPM only to secure the secrets of vTPMs and for deep quotes to
provide fresh proofs of the system's state.

This sounds really good. I look forward to trying it in Xen 4.5!


Thank you for your answers!
Jordi.







--
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

