
RE: [Xense-devel] Run vTPM in its own VM?


  • To: "Stefan Berger" <stefanb@xxxxxxxxxx>
  • From: "Scarlata, Vincent R" <vincent.r.scarlata@xxxxxxxxx>
  • Date: Thu, 14 Sep 2006 12:59:27 -0700
  • Cc: Xense-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 14 Sep 2006 12:59:41 -0700
  • List-id: "A discussion list for those developing security enhancements for Xen." <xense-devel.lists.xensource.com>
  • Thread-index: AcbYN6GibOVfEzsvQrOn553SmPPa+QAACAJA
  • Thread-topic: [Xense-devel] Run vTPM in its own VM?

Currently, I guess they are "trusted," but this is an artifact of Xen not yet having a measurement infrastructure for measuring domains that get launched. It is not the intention to have these domains be implicitly trusted.
 
-Vinnie


From: Stefan Berger [mailto:stefanb@xxxxxxxxxx]
Sent: Thursday, September 14, 2006 12:53 PM
To: Scarlata, Vincent R
Cc: Fischer, Anna; Xense-devel@xxxxxxxxxxxxxxxxxxx; xense-devel-bounces@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xense-devel] Run vTPM in its own VM?


Are DomU1vTPM and DomU2vTPM 'trusted' or are these domains also implementing a transitive trust model with integrity measurements taken inside of them?

-- Stefan

xense-devel-bounces@xxxxxxxxxxxxxxxxxxx wrote on 09/14/2006 02:30:40 PM:

> No, there is only one vtpm_manager per platform. As you noted, the vTPMs
> have a VTPM_MULTI_VM switch. This switch does two things: 1) it determines
> whether the vTPM reads its TPM commands from a backend or from a FIFO, and
> 2) whether it sends vtpm control commands to the manager via a tpm
> frontend or another FIFO.
>
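> In rough C (purely illustrative -- the paths, device names, and function
> below are made up, not taken from the actual vtpm sources), the switch
> amounts to something like this:
>
>   #include <fcntl.h>
>   #include <stdio.h>
>
>   /* Illustrative sketch only: open the vTPM's two channels depending
>    * on VTPM_MULTI_VM.  All paths and device names are placeholders. */
>   int open_vtpm_channels(int *cmd_fd, int *ctrl_fd)
>   {
>   #ifdef VTPM_MULTI_VM
>       /* vTPM in its own VM: guest TPM commands arrive over the tpm
>        * backend, control commands go to the manager over a tpm FE. */
>       *cmd_fd  = open("/dev/vtpm-backend", O_RDWR);   /* placeholder */
>       *ctrl_fd = open("/dev/tpm-frontend", O_RDWR);   /* placeholder */
>   #else
>       /* vTPM beside the manager: both channels are local FIFOs. */
>       *cmd_fd  = open("/var/vtpm/fifos/tpm_cmd",  O_RDWR);  /* placeholder */
>       *ctrl_fd = open("/var/vtpm/fifos/vtpm_ctl", O_RDWR);  /* placeholder */
>   #endif
>       if (*cmd_fd < 0 || *ctrl_fd < 0) {
>           perror("open");
>           return -1;
>       }
>       return 0;
>   }
>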
> So in multivm mode, it looks like the following (which will either clear
> things up, or completely confuse them).
>
>                         |----- DomU1vTPM ---| |----- DomU1 ----|
>                       /--> FE ~ vtpmd ~ BE <---> FE ~ vtpm drv |
> |----- Dom 0 ------|  | |-------------------| |----------------|
> vtpm_managerd ~ BE <--+
>                       | |----- DomU2vTPM ---| |----- DomU2 ----|
>                       \--> FE ~ vtpmd ~ BE <---> FE ~ vtpm drv |
>                         |-------------------| |----------------|
>
>
>                       ^                      ^
>                       |                      |
>                save/load cmds             tpm cmds
>
>
> The vtpm still has this code in it. The missing code is in the manager.
> To support both models, the manager had become very complex. In the
> multi-VM case, only control commands came in. In the single-VM case, the
> manager received both TPM commands and control commands (open/close
> vtpm); it handled the control commands itself and forwarded the TPM
> commands to a vtpm, while accepting further control commands (save/load
> NV) on a different channel. This was all done through one command handler
> with a mess of #ifdefs.
>
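> As a caricature (illustrative only -- all of these names are invented,
> none of this is the real manager code), that handler looked roughly like:
>
>   /* One handler for every mode, switched with #ifdefs. */
>   struct msg { int is_control; /* ...payload... */ };
>
>   static void handle_control(struct msg *m)  { (void)m; /* open/close vtpm, save/load NV */ }
>   static void forward_to_vtpm(struct msg *m) { (void)m; /* hand TPM command to a vtpm    */ }
>
>   void old_style_handler(struct msg *m)
>   {
>   #ifdef VTPM_MULTI_VM
>       handle_control(m);        /* multi-VM: only control traffic arrives here */
>   #else
>       if (m->is_control)
>           handle_control(m);    /* single-VM: control and TPM traffic are mixed */
>       else
>           forward_to_vtpm(m);
>   #endif
>   }
>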
> I rewrote the handler routines and threading routines to be more
> generalized. Now everything is agnostic to the number of VMs except
> manager/vtpmd.c. That file defines the necessary threads, FIFOs, and
> handler instances. The current file is a couple hundred lines and sets
> everything up for the single-VM case. I plan on writing another vtpmd.c
> which sets the manager up for multi-VM mode, and will then use some sort
> of selector to choose which file to compile based on the mode, or maybe
> build two apps. This is why I call it incomplete.
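> To give a feel for the shape of the new vtpmd.c (again purely
> illustrative -- nothing below comes from the actual sources, all names
> and paths are invented), it boils down to a table of channels and
> handlers plus a listener thread per entry:
>
>   #include <pthread.h>
>   #include <stddef.h>
>
>   typedef void (*handler_fn)(const unsigned char *buf, size_t len);
>
>   struct listener {
>       const char *channel;   /* FIFO path or device node (placeholder) */
>       handler_fn  handle;    /* what to do with each message           */
>       pthread_t   thread;
>   };
>
>   static void handle_tpm_cmd(const unsigned char *b, size_t n)  { (void)b; (void)n; }
>   static void handle_ctrl_cmd(const unsigned char *b, size_t n) { (void)b; (void)n; }
>
>   static void *listen_loop(void *arg)
>   {
>       struct listener *l = arg;   /* open l->channel, read messages, call l->handle() */
>       (void)l;
>       return NULL;
>   }
>
>   int main(void)
>   {
>       /* Single-VM setup: one channel for TPM commands, one for vtpm
>        * control commands.  A multi-VM vtpmd.c would only swap this table. */
>       struct listener listeners[] = {
>           { "/var/vtpm/fifos/tpm_cmd",  handle_tpm_cmd  },
>           { "/var/vtpm/fifos/vtpm_ctl", handle_ctrl_cmd },
>       };
>       for (size_t i = 0; i < 2; i++)
>           pthread_create(&listeners[i].thread, NULL, listen_loop, &listeners[i]);
>       for (size_t i = 0; i < 2; i++)
>           pthread_join(listeners[i].thread, NULL);
>       return 0;
>   }
>
> The real file will of course look different; the point is just that the
> per-mode knowledge lives in one small setup file.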
>                                              
> -Vinnie
>
> -----Original Message-----
> From: Fischer, Anna [mailto:anna.fischer@xxxxxx]
> Sent: Thursday, September 14, 2006 10:27 AM
> To: Scarlata, Vincent R; Xense-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xense-devel] Run vTPM in its own VM?
>
> Thanks for your reply.
>
> But do I understand it correctly that in your design you will have a
> vTPM manager running in each vTPM BE domain? And the vTPM then talks
> again through FIFOs to the vTPM manager, which talks to the BE?
>
> However, the code seems to be designed so that the vTPMs talk directly
> to the BE. Is that what you mean by the code for this configuration
> being broken? Going by the currently implemented design, I don't see how
> such direct communication can work, since capabilities like saving and
> loading NVRAM won't work without the vTPM manager in between, right?
>
> Anna
>
> -----Original Message-----
> From: Scarlata, Vincent R [mailto:vincent.r.scarlata@xxxxxxxxx]
> Sent: Donnerstag, 14. September 2006 17:59
> To: Fischer, Anna; Xense-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xense-devel] Run vTPM in its own VM?
>
> Sorry Anna, the documentation is both slightly out of date, and slightly
> ahead of its time. :-)
>
> The vtpm manager was architected to allow each vtpm instance to run in
> its own VM, but during the last restructuring of the code, support for
> this configuration was broken. It's now incomplete. Due to other
> commitments I won't be able to get back to this immediately, but I hope
> to submit a patch to re-enable this config option within a month-ish.
>
> The way it looked, and will look again, is the following. A standard
> config would be Dom0, a DomU1 guest, a DomU1vTPM vtpm domain, ... DomUn,
> DomUnvTPM. DomU1 has a tpm FE, for which DomU1vTPM has the BE. Similarly,
> DomU2 has a tpm FE, for which DomU2vTPM has the BE. This allows direct
> communication between each DomU and its vTPM, as you mention below. Then
> all the DomU*vTPM domains have tpm FEs, for which the domain housing the
> vtpm manager has the BE. By default this is Dom0, but provided that the
> tpm device can be assigned to a different domain, the manager can be put
> in any domain. The vtpm_manager's domain has the tpm driver.
>
> This is a little heavier-weight than running everything in Dom0, but it
> stops the manager from being a bottleneck in TPM access, since all
> DomUs can access their vTPMs simultaneously (though the manager can
> still only handle one vtpm request at a time to save internal state).
> It also establishes isolation between the vTPMs.
>
> Do you need this functionality, or are you just doing thought
> experiments?
>
> Hope this answers your questions,
>
> -Vinnie Scarlata
>   Trusted Platform Lab
>   Corporate Technology Group
>   Intel Corporation
>
> -----Original Message-----
> From: xense-devel-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xense-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Fischer,
> Anna
> Sent: Thursday, September 14, 2006 2:01 AM
> To: Xense-devel@xxxxxxxxxxxxxxxxxxx
> Subject: [Xense-devel] Run vTPM in its own VM?
>
> The README of the current Xen unstable version says that setting
> VTPM_MULTI_VM allows running each vTPM in its own VM. However, compiling
> with this option doesn't work on my machine and the code doesn't seem to
> be complete for this option.
>
> Did I miss configuring something, or is the current implementation in
> Xen not really ready for running a vTPM in a separate VM?
>
> Can you explain to me how communication will look in the planned
> implementation in Xen? Will all communication continue to go through the
> vTPM manager, with the manager talking to a kind of FE that transmits
> TPM commands to a BE running in a separate domain? Or is it possible to
> set up direct connections between a user domain's TPM FE and the vTPM
> running in an isolated VM?
>
> Regards,
> Anna
>
_______________________________________________
Xense-devel mailing list
Xense-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xense-devel

 

