
[Xen-users] Problem getting VTPM to work



Hi,

I'm trying to get the VTPM to work with an Atmel TPM. The computer is an
IBM T41p running Debian Etch, and I verified that the TPM is working
well using the libtpm library:

,----
| debian:~# ./libtpm-2.0/utils/tpm_demo
| TPM successfully reset
| TPM version 1.1.0.6
| 16 PCR registers are available
| ...
`----

I installed Xen testing from source (downloaded by `hg clone
http://xenbits.xensource.com/xen-3.0-testing.hg' yesterday), and Xen
itself is running fine.
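
For completeness, the build and install steps were roughly the standard
ones (quoted from memory, so the exact make invocations may differ
slightly):

,----
| debian:~# hg clone http://xenbits.xensource.com/xen-3.0-testing.hg
| debian:~# cd xen-3.0-testing.hg
| debian:~# make xen tools
| debian:~# make install-xen install-tools
`----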

I compiled my own kernel (make linux-2.6-xen-config
CONFIGMODE=menuconfig, build...) with tpm_atmel and the Xen TPM options
included.
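
As far as I remember, the relevant options were roughly the following
(names quoted from memory out of my .config, so treat them as
approximate):

,----
| CONFIG_TCG_TPM=m             # TPM core support
| CONFIG_TCG_ATMEL=m           # the tpm_atmel driver
| CONFIG_XEN_TPMDEV_BACKEND=y  # Xen TPM backend (dom0 side)
| CONFIG_XEN_TPMDEV_FRONTEND=y # Xen TPM frontend (guest side)
`----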

Before starting any DomU, I run `vtpm_managerd', and everything looks
fine:

,----
| INFO[VTPM]: Loaded saved state (dmis = 8).
| INFO[VTSP]: Loading Key into TPM.
| INFO[VTPM]: Re-attaching DMI instance 0 on domain 0 .
| INFO[TCS]: Calling TCS_OpenContext:
| INFO[VTPM]: [1]: Waiting for Guest requests & ctrl messages.
| INFO[VTPM]: [2]: Waiting for DMI messages.
`----
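
For reference, the test domain's config file contains the usual vtpm
line, following docs/misc/vtpm.txt (instance number quoted from
memory):

,----
| # in the DomU config file
| vtpm = [ 'instance=1, backend=0' ]
`----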

Once I start my test domain, the following messages are displayed:
,----
| INFO[VTPM]: [1]: RECV[14}: 0x0 0 0 0 1 c1 0 0 0 13 0 0 0 1 0 0 0 0 0 0 0 0 1
| INFO[VTPM]: Re-attaching DMI instance 1 on domain 0 .
| INFO[TCS]: Calling TCS_OpenContext:
| ERROR[VTPM]: Could not exec to launch vtpm
| INFO[VTPM]: [1]: SENT: 0x0 0 0 0 1 c1 0 0 0 a 0 0 0 0
| INFO[VTPM]: [1]: Waiting for Guest requests & ctrl messages.
| INFO[VTPM]: Launching DMI on PID = 7308
| ERROR[VTPM]: [1]: Error writing to BE. Aborting...
| INFO[VTPM]: [1]: Waiting for Guest requests & ctrl messages.
| ERROR[VTPM]: [1]: Can't open inbound fh for /dev/vtpm.
`----

(BTW: if I close vtpm_managerd with CTRL+C, it segfaults, and keys are
left loaded in the TPM:
,----
| INFO[VTPM]: VTPM Manager shutting down for signal 2.
| INFO[TCS]: Calling TCS_CloseContext.
| INFO[VTPM]: Child shutting down
| Segmentation fault
`----)

An exec of vtpm fails because there is no /usr/bin/vtpmd. However, I
tried modifying the Makefile in tools/vtpm_manager/manager so that it
also builds and installs vtpmd, by adding it to the BIN= line, roughly:
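
,----
| # tools/vtpm_manager/manager/Makefile (my change, quoted from memory)
| -BIN = vtpm_managerd
| +BIN = vtpm_managerd vtpmd
`----

With that in place, vtpmd gets exec'd, but now I get the following
error: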

,----
| INFO[VTPM]: [1]: RECV[14}: 0x0 0 0 0 1 c1 0 0 0 13 0 0 0 1 0 0 0 0 0 0 0 0 1
| INFO[VTPM]: Re-attaching DMI instance 1 on domain 0 .
| INFO[TCS]: Calling TCS_OpenContext:
| INFO[VTPM]: Starting VTPM.
| INFO[TCS]: Constructing new TCS:
| ERROR[TXDATA]: TPM open failedERROR in VTPM_Init_Service at vtpm_manager.c:769 code: TPM_IOERROR.
| ERROR[VTPM]: Closing vtpmd due to error during startup.
| INFO[VTPM]: Launching DMI on PID = 7632
| INFO[VTPM]: [1]: SENT: 0x0 0 0 0 1 c1 0 0 0 a 0 0 0 0
| INFO[VTPM]: [1]: Waiting for Guest requests & ctrl messages.
`----

So it seems that building vtpmd this way is not how it is supposed to
work...

If I replace /usr/bin/vtpmd with an empty shell script, I don't get any
errors when starting my test domain. The script is nothing more than
this (reproduced from memory):
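
,----
| debian:~# cat /usr/bin/vtpmd
| #!/bin/sh
| # does nothing; only exists so that the exec in vtpm_managerd succeeds
| exit 0
`----

But on the first access from the guest (`cat
/sys/bus/platform/devices/tpm_vtpm/pcrs' or running tpm_demo again), I
get the following: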

,----
| INFO[VTPM]: [1]: RECV[14}: 0x0 0 0 0 1 c1 0 0 0 13 0 0 0 1 0 0 0 0 0 0 0 0 1
| INFO[VTPM]: Re-attaching DMI instance 1 on domain 0 .
| INFO[TCS]: Calling TCS_OpenContext:
| INFO[VTPM]: Launching DMI on PID = 7965
| INFO[VTPM]: [1]: SENT: 0x0 0 0 0 1 c1 0 0 0 a 0 0 0 0
| INFO[VTPM]: [1]: Waiting for Guest requests & ctrl messages.
(running `cat /sys/bus/platform/devices/tpm_vtpm/pcrs')
| INFO[VTPM]: [1]: RECV[14}: 0x0 0 0 1 0 c1 0 0 0 16 0 0 0 65 0 0 0 5 0 0 0 4 0 0 1 1
| INFO[VTPM]: [1]: Forwarding command to DMI.
| ERROR[VTPM]: [1]: VTPM ERROR: Can't open outbound fh to dmi.
| INFO[VTPM]: [1]: SENT: 0x0 0 0 1 0 c1 0 0 0 a 0 0 0 1f
| INFO[VTPM]: [1]: Waiting for Guest requests & ctrl messages.
`----

Can you tell me what I am doing wrong?

Thanks in advance,
Martin

-- 
Martin Hermanowski
http://martin.hermanowski.name
