[Xen-devel] [PATCH VTPM v4 4/5] Add vtpm documentation
See the files included in this patch for details

Signed-off-by: Matthew Fioravante <matthew.fioravante@xxxxxxxxxx>
---
 docs/misc/vtpm.txt     | 355 ++++++++++++++++++++++++++++++++++--------------
 stubdom/vtpm/README    |  75 ++++++++++
 stubdom/vtpmmgr/README |  75 ++++++++++
 3 files changed, 400 insertions(+), 105 deletions(-)
 create mode 100644 stubdom/vtpm/README
 create mode 100644 stubdom/vtpmmgr/README

diff --git a/docs/misc/vtpm.txt b/docs/misc/vtpm.txt
index ad37fe8..1e8887c 100644
--- a/docs/misc/vtpm.txt
+++ b/docs/misc/vtpm.txt
@@ -1,152 +1,297 @@
-Copyright: IBM Corporation (C), Intel Corporation
-29 June 2006
-Authors: Stefan Berger <stefanb@xxxxxxxxxx> (IBM),
-         Employees of Intel Corp
-
-This document gives a short introduction to the virtual TPM support
-in XEN and goes as far as connecting a user domain to a virtual TPM
-instance and doing a short test to verify success. It is assumed
-that the user is fairly familiar with compiling and installing XEN
-and Linux on a machine.
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense. All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the virtual Trusted Platform Module (vTPM) subsystem
+for Xen. The reader is assumed to have familiarity with building and installing
+Xen, Linux, and a basic understanding of the TPM and vTPM concepts.
+
+------------------------------
+INTRODUCTION
+------------------------------
+The goal of this work is to provide TPM functionality to a virtual guest
+operating system (a DomU). This allows programs to interact with a TPM in a
+virtual system the same way they interact with a TPM on the physical system.
+Each guest gets its own unique, emulated, software TPM. However, each
+vTPM's secrets (keys, NVRAM, etc.) are managed by a vTPM Manager domain, which
+seals the secrets to the physical TPM. Thus, the vTPM subsystem extends the
+chain of trust rooted in the hardware TPM to virtual machines in Xen. Each
+major component of vTPM is implemented as a separate domain, providing secure
+separation guaranteed by the hypervisor. The vTPM domains are implemented in
+mini-os to reduce memory and processor overhead.
+
+This mini-os vTPM subsystem was built on top of the previous vTPM
+work done by IBM and Intel Corporation.
-Production Prerequisites: An x86-based machine machine with a
-Linux-supported TPM on the motherboard (NSC, Atmel, Infineon, TPM V1.2).
-Development Prerequisites: An emulator for TESTING ONLY is provided
+
+------------------------------
+DESIGN OVERVIEW
+------------------------------
+
+The architecture of vTPM is described below:
+
++------------------+
+|    Linux DomU    | ...
+|       |  ^       |
+|       v  |       |
+|   xen-tpmfront   |
++------------------+
+        |  ^
+        v  |
++------------------+
+| mini-os/tpmback  |
+|       |  ^       |
+|       v  |       |
+|  vtpm-stubdom    | ...
+|       |  ^       |
+|       v  |       |
+| mini-os/tpmfront |
++------------------+
+        |  ^
+        v  |
++------------------+
+| mini-os/tpmback  |
+|       |  ^       |
+|       v  |       |
+|    vtpmmgrdom    |
+|       |  ^       |
+|       v  |       |
+| mini-os/tpm_tis  |
++------------------+
+        |  ^
+        v  |
++------------------+
+|   Hardware TPM   |
++------------------+
+
+ * Linux DomU: The Linux based guest that wants to use a vTPM. There may be
+               more than one of these.
+
+ * xen-tpmfront.ko: Linux kernel virtual TPM frontend driver. This driver
+                    provides vTPM access to a para-virtualized Linux based DomU.
+
+ * mini-os/tpmback: Mini-os TPM backend driver. The Linux frontend driver
+                    connects to this backend driver to facilitate
+                    communications between the Linux DomU and its vTPM. This
+                    driver is also used by vtpmmgrdom to communicate with
+                    vtpm-stubdom.
+
+ * vtpm-stubdom: A mini-os stub domain that implements a vTPM. There is a
+                 one to one mapping between running vtpm-stubdom instances and
+                 logical vtpms on the system. The vTPM Platform Configuration
+                 Registers (PCRs) are all initialized to zero.
+
+ * mini-os/tpmfront: Mini-os TPM frontend driver. The vTPM mini-os domain
+                     vtpm-stubdom uses this driver to communicate with
+                     vtpmmgrdom. This driver could also be used separately to
+                     implement a mini-os domain that wishes to use a vTPM of
+                     its own.
+
+ * vtpmmgrdom: A mini-os domain that implements the vTPM manager.
+               There is only one vTPM manager and it should be running during
+               the entire lifetime of the machine. This domain regulates
+               access to the physical TPM on the system and secures the
+               persistent state of each vTPM.
+
+ * mini-os/tpm_tis: Mini-os TPM version 1.2 TPM Interface Specification (TIS)
+                    driver. This driver is used by vtpmmgrdom to talk directly
+                    to the hardware TPM. Communication is facilitated by
+                    mapping hardware memory pages into vtpmmgrdom.
+
+ * Hardware TPM: The physical TPM that is soldered onto the motherboard.
+
+------------------------------
+INSTALLATION
+------------------------------
+
+Prerequisites:
+--------------
+You must have an x86 machine with a TPM on the motherboard.
+The only software requirement for compiling vTPM is cmake.
+You must use libxl to manage domains with vTPMs. 'xm' is
+deprecated and does not support vTPM.

 Compiling the XEN tree:
 -----------------------
-Compile the XEN tree as usual after the following lines set in the
-linux-2.6.??-xen/.config file:
+Compile and install the XEN tree as usual. Be sure to build and install
+the stubdom tree.
+
+Compiling the LINUX dom0 kernel:
+--------------------------------
-CONFIG_XEN_TPMDEV_BACKEND=m
+
+The Linux dom0 kernel has no special prerequisites.
-CONFIG_TCG_TPM=m
-CONFIG_TCG_TIS=m       (supported after 2.6.17-rc4)
-CONFIG_TCG_NSC=m
-CONFIG_TCG_ATMEL=m
-CONFIG_TCG_INFINEON=m
-CONFIG_TCG_XEN=m
-<possible other TPM drivers supported by Linux>
+
+Compiling the LINUX domU kernel:
+--------------------------------
-If the frontend driver needs to be compiled into the user domain
-kernel, then the following two lines should be changed.
+
+The domU kernel used by domains with vTPMs must
+include the xen-tpmfront.ko driver. It can be built
+directly into the kernel or as a module.

 CONFIG_TCG_TPM=y
 CONFIG_TCG_XEN=y

+------------------------------
+VTPM MANAGER SETUP
+------------------------------
+
+Manager disk image setup:
+-------------------------
+
+The vTPM Manager requires a disk image to store its
+encrypted data. The image does not require a filesystem
+and can live anywhere on the host disk. The image does not need
+to be large; 8 to 16 MB should be sufficient.
+
+# dd if=/dev/zero of=/var/vtpmmgrdom.img bs=16M count=1
+
+Manager config file:
+--------------------
+
+The vTPM Manager domain (vtpmmgrdom) must be started like
+any other Xen virtual machine and requires a config file.
+The manager requires a disk image for storage and permission
+to access the hardware memory pages for the TPM. An
+example configuration looks like the following.
+
+kernel="/usr/lib/xen/boot/vtpmmgrdom.gz"
+memory=16
+disk=["file:/var/vtpmmgrdom.img,hda,w"]
+name="vtpmmgrdom"
+iomem=["fed40,5"]
+
+The iomem line tells xl to allow access to the TPM
+IO memory pages, which are 5 pages that start at
+0xfed40000.
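+
+As a quick sanity check of the iomem syntax (the first field is a
+hexadecimal page frame number and the second a page count), you can
+confirm that page 0xfed40 corresponds to physical address 0xfed40000,
+assuming the usual 4 KiB x86 page size:
+
+# printf '0x%x\n' $(( 0xfed40 * 0x1000 ))
+0xfed40000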
+
+Starting and stopping the manager:
+----------------------------------
+
+The vTPM manager should be started at boot; you may wish to
+create an init script to do this (a minimal example script is
+sketched at the end of this section).
+
+# xl create -c vtpmmgrdom.cfg
+
+Once initialization is complete, you should see the following:
+INFO[VTPM]: Waiting for commands from vTPM's:
+
+To shut down the manager you must destroy it. To avoid data corruption,
+only destroy the manager when you see the above "Waiting for commands"
+message. This ensures the disk is in a consistent state.
+
+# xl destroy vtpmmgrdom
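+
+A minimal boot-time helper could look like the following sketch. The
+config file path /etc/xen/vtpmmgrdom.cfg is only an example; point it at
+wherever you keep the manager's config:
+
+#!/bin/sh
+# Example init script body: start the vTPM manager domain at boot.
+# The -c console flag is omitted so the script does not block on a console.
+xl create /etc/xen/vtpmmgrdom.cfg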
+
+------------------------------
+VTPM AND LINUX PVM SETUP
+------------------------------
-You must also enable the virtual TPM to be built:
+
+In the following examples we will assume we have a Linux
+guest named "domu" with its associated configuration
+located at /home/user/domu. Its vTPM will be named
+domu-vtpm.
-In Config.mk in the Xen root directory set the line
+
+vTPM disk image setup:
+----------------------
-VTPM_TOOLS ?= y
+
+The vTPM requires a disk image to store its persistent
+data. The image does not require a filesystem. The image
+does not need to be large; 8 MB should be sufficient.
-and in
+
+# dd if=/dev/zero of=/home/user/domu/vtpm.img bs=8M count=1
-tools/vtpm/Rules.mk set the line
+
+vTPM config file:
+-----------------
-BUILD_EMULATOR = y
+
+The vTPM domain requires a configuration file like
+any other domain. The vTPM requires a disk image for
+storage and a TPM frontend driver to communicate
+with the manager. An example configuration is given:
-Now build the Xen sources from Xen's root directory:
+
+kernel="/usr/lib/xen/boot/vtpm-stubdom.gz"
+memory=8
+disk=["file:/home/user/domu/vtpm.img,hda,w"]
+name="domu-vtpm"
+vtpm=["backend=vtpmmgrdom,uuid=ac0a5b9e-cbe2-4c07-b43b-1d69e46fb839"]
-make install
+
+The vtpm= line sets up the tpm frontend driver. The backend must be set
+to vtpmmgrdom. You are required to generate a uuid for this vtpm.
+You can use the uuidgen unix program or some other method to create a
+uuid. The uuid uniquely identifies this vtpm to the manager.
+If you wish to clear the vTPM data you can either recreate the
+disk image or change the uuid.
-Also build the initial RAM disk if necessary.
+
+Linux Guest config file:
+------------------------
-Reboot the machine with the created Xen kernel.
+
+The Linux guest config file needs to be modified to include
+the Linux tpmfront driver. Add the following line:
-Note: If you do not want any TPM-related code compiled into your
-kernel or built as module then comment all the above lines like
-this example:
-# CONFIG_TCG_TPM is not set
+
+vtpm=["backend=domu-vtpm"]
+
+Currently only paravirtualized guests are supported.
-Modifying VM Configuration files:
----------------------------------
+
+Launching and shut down:
+------------------------
-VM configuration files need to be adapted to make a TPM instance
-available to a user domain. The following VM configuration file is
-an example of how a user domain can be configured to have a TPM
-available. It works similar to making a network interface
-available to a domain.
+
+To launch a Linux guest with a vTPM we first have to start the vTPM domain.
-kernel = "/boot/vmlinuz-2.6.x"
-ramdisk = "/xen/initrd_domU/U1_ramdisk.img"
-memory = 32
-name = "TPMUserDomain0"
-vtpm = ['instance=1,backend=0']
-root = "/dev/ram0 console=tty ro"
-vif = ['backend=0']
+
+# xl create -c /home/user/domu/vtpm.cfg
-In the above configuration file the line 'vtpm = ...' provides
-information about the domain where the virtual TPM is running and
-where the TPM backend has been compiled into - this has to be
-domain 0 at the moment - and which TPM instance the user domain
-is supposed to talk to. Note that each running VM must use a
-different instance and that using instance 0 is NOT allowed. The
-instance parameter is taken as the desired instance number, but
-the actual instance number that is assigned to the virtual machine
-can be different. This is the case if for example that particular
-instance is already used by another virtual machine. The association
-of which TPM instance number is used by which virtual machine is
-kept in the file /var/vtpm/vtpm.db. Associations are maintained by
-a xend-internal vTPM UUID and vTPM instance number.
+
+After initialization is complete, you should see the following:
+Info: Waiting for frontend domain to connect..
-Note: If you do not want TPM functionality for your user domain simply
-leave out the 'vtpm' line in the configuration file.
+
+Next, launch the Linux guest
+# xl create -c /home/user/domu/domu.cfg
-Running the TPM:
-----------------
+
+If xen-tpmfront was compiled as a module, be sure to load it
+in the guest.
-To run the vTPM, the device /dev/vtpm must be available.
-Verify that 'ls -l /dev/vtpm' shows the following output:
+
+# modprobe xen-tpmfront
-crw-------  1 root root 10, 225 Aug 11 06:58 /dev/vtpm
+
+After the Linux domain boots and the xen-tpmfront driver is loaded,
+you should see the following on the vtpm console:
-If it is not available, run the following command as 'root'.
-mknod /dev/vtpm c 10 225
+
+Info: VTPM attached to Frontend X/Y
-Make sure that the vTPM is running in domain 0. To do this run the
-following:
+
+If you have trousers and tpm_tools installed on the guest, you can test the
+vtpm.
-modprobe tpmbk
+
+On guest:
+# tcsd (if tcsd is not running already)
+# tpm_version
-/usr/bin/vtpm_managerd
+
+The version command should return the following:
+  TPM 1.2 Version Info:
+  Chip Version:        1.2.0.7
+  Spec Level:          2
+  Errata Revision:     1
+  TPM Vendor ID:       ETHZ
+  TPM Version:         01010000
+  Manufacturer Info:   4554485a
-Start a user domain using the 'xm create' command. Once you are in the
-shell of the user domain, you should be able to do the following as
-user 'root':
+
+You should also see the command being sent to the vtpm console as well
+as the vtpm saving its state. You should see the vtpm key being
+encrypted and stored on the vtpmmgrdom console.
-Insert the TPM frontend into the kernel if it has been compiled as a
-kernel module.
+
+To shut down the guest and its vtpm, you just have to shut down the guest
+normally. As soon as the guest vm disconnects, the vtpm will shut itself
+down automatically.
-> modprobe tpm_xenu
+
+On guest:
+# shutdown -h now
-Check the status of the TPM
+
+You may wish to write a script to start your vtpm and guest together.
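+
+One possible sketch of such a script, reusing the example paths from this
+document:
+
+#!/bin/sh
+# Example: start the guest's vTPM domain first, then the guest itself.
+# The vTPM domain will wait for the guest's frontend to connect.
+xl create /home/user/domu/vtpm.cfg
+xl create /home/user/domu/domu.cfg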
-> cd /sys/devices/xen/vtpm-0
-> ls
-[...] cancel caps pcrs pubek [...]
-> cat pcrs
-PCR-00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-01: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-02: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-03: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-04: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-05: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-06: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-07: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-PCR-08: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
-[...]
+------------------------------
+MORE INFORMATION
+------------------------------
-At this point the user domain has been successfully connected to its
-virtual TPM instance.
+
+See stubdom/vtpmmgr/README for more details about how
+the manager domain works, how to use it, and its command line
+parameters.
-For further information please read the documentation in
-tools/vtpm_manager/README and tools/vtpm/README
+
+See stubdom/vtpm/README for more specifics about how vtpm-stubdom
+operates and the command line options it accepts.
-Stefan Berger and Employees of the Intel Corp
diff --git a/stubdom/vtpm/README b/stubdom/vtpm/README
new file mode 100644
index 0000000..b8ddac6
--- /dev/null
+++ b/stubdom/vtpm/README
@@ -0,0 +1,75 @@
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense. All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the operation and command line interface
+of vtpm-stubdom. See docs/misc/vtpm.txt for details on the
+vTPM subsystem as a whole.
+
+
+------------------------------
+OPERATION
+------------------------------
+
+The vtpm-stubdom is a mini-os domain that emulates a TPM for the guest OS to
+use. It is a small wrapper around the Berlios TPM emulator
+version 0.7.4. Commands are passed from the Linux guest via the
+mini-os TPM backend driver. vTPM data is encrypted and stored on a disk image
+provided to the virtual machine. The key used to encrypt the data, along
+with a hash of the vTPM's data, is sent to the vTPM manager for secure storage
+and later retrieval. The vTPM domain communicates with the manager using a
+mini-os tpm front/back device pair.
+
+------------------------------
+COMMAND LINE ARGUMENTS
+------------------------------
+
+Command line arguments are passed to the domain via the 'extra'
+parameter in the VM config file. Each parameter is separated
+by white space. For example:
+
+extra="foo=bar baz"
+
+List of Arguments:
+------------------
+
+loglevel=<LOG>: Controls the amount of logging printed to the console.
+       The possible values for <LOG> are:
+       error
+       info (default)
+       debug
+
+clear: Start the Berlios emulator in "clear" mode. (default)
+
+save: Start the Berlios emulator in "save" mode.
+
+deactivated: Start the Berlios emulator in "deactivated" mode.
+       See the Berlios TPM emulator documentation for details
+       about the startup mode. For all normal use, always use clear,
+       which is the default. You should not need to specify any of these.
+
+maintcmds=<1|0>: Enable or disable the TPM maintenance commands.
+       These commands are used by TPM manufacturers and thus
+       open a security hole. They are disabled by default.
+
+hwinitpcr=<PCRSPEC>: Initialize the virtual Platform Configuration Registers
+       (PCRs) with PCR values from the hardware TPM. Each pcr specified by
+       <PCRSPEC> will be initialized with the value of that same PCR in the
+       hardware TPM once at startup. By default all PCRs are zero initialized.
+       Valid values of <PCRSPEC> are:
+       all: copy all pcrs
+       none: copy no pcrs (default)
+       <N>: copy pcr n
+       <X-Y>: copy pcrs x to y (inclusive)
+
+       These can also be combined by comma separation, for example:
+       hwinitpcrs=5,12-16
+       will copy pcrs 5, 12, 13, 14, 15, and 16.
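+
+Putting this together, a vtpm-stubdom config file might pass several of
+these arguments at once on its 'extra' line, for example (using the
+hwinitpcrs spelling from the example above):
+
+extra="loglevel=debug hwinitpcrs=all"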
+
+------------------------------
+REFERENCES
+------------------------------
+
+Berlios TPM Emulator:
+http://tpm-emulator.berlios.de/
diff --git a/stubdom/vtpmmgr/README b/stubdom/vtpmmgr/README
new file mode 100644
index 0000000..42ebbc8
--- /dev/null
+++ b/stubdom/vtpmmgr/README
@@ -0,0 +1,75 @@
+Copyright (c) 2010-2012 United States Government, as represented by
+the Secretary of Defense. All rights reserved.
+November 12 2012
+Authors: Matthew Fioravante (JHUAPL),
+
+This document describes the operation and command line interface
+of vtpmmgrdom. See docs/misc/vtpm.txt for details on the
+vTPM subsystem as a whole.
+
+
+------------------------------
+OPERATION
+------------------------------
+
+The vtpmmgrdom implements a vTPM manager which has two major functions:
+
+ - Securely store encryption keys for each of the vTPMs
+ - Regulate access to the hardware TPM for the entire system
+
+The manager accepts commands from the vtpm-stubdom domains via the mini-os
+TPM backend driver. The vTPM manager communicates directly with the hardware
+TPM using the mini-os tpm_tis driver.
+
+
+When the manager starts for the first time it will check if the TPM
+has an owner. If the TPM is unowned, it will attempt to take ownership
+with the supplied owner_auth (see below) and then create a TPM
+storage key which will be used to secure vTPM key data. Currently the
+manager only binds vTPM keys to the disk. In the future support
+for sealing to PCRs should be added.
+
+------------------------------
+COMMAND LINE ARGUMENTS
+------------------------------
+
+Command line arguments are passed to the domain via the 'extra'
+parameter in the VM config file. Each parameter is separated
+by white space. For example:
+
+extra="foo=bar baz"
+
+List of Arguments:
+------------------
+
+owner_auth=<AUTHSPEC>: Set the owner auth of the TPM. The default
+       is the well known owner auth of all ones.
+
+srk_auth=<AUTHSPEC>: Set the SRK auth for the TPM. The default is
+       the well known srk auth of all zeroes.
+       The possible values of <AUTHSPEC> are:
+       well-known: Use the well known auth (default)
+       random: Randomly generate an auth
+       hash: <HASH>: Use the given 40 character ASCII hex string
+       text: <STR>: Use the sha1 hash of <STR>.
+
+tpmdriver=<DRIVER>: Which driver to use to talk to the hardware TPM.
+       Don't change this unless you know what you're doing.
+       The possible values of <DRIVER> are:
+       tpm_tis: Use the tpm_tis driver to talk directly to the TPM.
+                The domain must have access to TPM IO memory. (default)
+       tpmfront: Use tpmfront to talk to the TPM. The domain must have
+                 a tpmfront device setup to talk to another domain
+                 which provides access to the TPM.
+
+The following options only apply to the tpm_tis driver:
+
+tpmiomem=<ADDR>: The base address of the hardware memory pages of the
+       TPM (default 0xfed40000).
+
+tpmirq=<IRQ>: The irq of the hardware TPM if using interrupts. A value of
+       "probe" can be set to probe for the irq. A value of 0
+       disables interrupts and uses polling (default 0).
+
+tpmlocality=<LOC>: Attempt to use locality <LOC> of the hardware TPM.
+       (default 0)
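+
+As with vtpm-stubdom, these arguments go on the 'extra' line of the
+manager's config file. For example, to have the manager generate a random
+owner auth while keeping the other defaults:
+
+extra="owner_auth=random"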
--
1.7.10.4