
Re: [Xen-devel] vTPM Manager shuts down



On 04/18/2013 07:16 AM, Jordi Cucurull Juan wrote:
Hi all,

I am trying to set up the development version of Xen with support for
virtual TPMs. I am having an issue starting the vTPM Manager. Basically
I create the vTPM Manager stub domain as detailed in the documentation:

make install-vtpmmgr
dd if=/dev/zero of=/var/xen/vtpmmgr-stubdom.img bs=16M count=1
vim /etc/xen/vtpmmgr-stubdom.cfg

     kernel="/usr/local/lib/xen/boot/vtpmmgr-stubdom.gz"
     memory=16
     disk=["file:/var/xen/vtpmmgr-stubdom.img,hda,w"]
     name="vtpmmgr"
     iomem=["fed40,5"]

xl create -c /etc/xen/vtpmmgr-stubdom.cfg

However, when the stub domain is launched it immediately shuts
down (see the trace below). Am I doing something wrong? Is there
something else that could produce this behaviour?

This config matches my (working) config for the vtpmmgr domain, so there's
nothing immediately wrong here. The only differences I note are that I am
using the kernel blkback (and an LVM partition) for the disk image, which
shouldn't make any difference from the stubdom's perspective, and that I
have XSM enabled in the hypervisor (and so have a seclabel defined).
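For comparison, the relevant parts of my config look roughly like the
fragment below. The disk path is a placeholder for my LVM setup, and the
seclabel value is just the generic domU label from the example FLASK
policy; the actual label depends on your policy, so don't copy it verbatim:

```
kernel="/usr/local/lib/xen/boot/vtpmmgr-stubdom.gz"
memory=16
disk=["phy:/dev/vg0/vtpmmgr,hda,w"]   # kernel blkback + LVM partition (path is illustrative)
name="vtpmmgr"
iomem=["fed40,1"]
seclabel='system_u:system_r:domU_t'   # placeholder; substitute your policy's label
```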

Daemon running with PID 6347
Xen Minimal OS!
   start_info: 0xa3000(VA)
     nr_pages: 0x1000
   shared_inf: 0xcaff0000(MA)
      pt_base: 0xa6000(VA)
nr_pt_frames: 0x5
     mfn_list: 0x9b000(VA)
    mod_start: 0x0(VA)
      mod_len: 0
        flags: 0x0
     cmd_line:
   stack:      0x5a7a0-0x7a7a0
MM: Init
       _text: 0x0(VA)
      _etext: 0x397f4(VA)
    _erodata: 0x46000(VA)
      _edata: 0x48c00(VA)
stack start: 0x5a7a0(VA)
        _end: 0x9adc0(VA)
   start_pfn: ae
     max_pfn: 1000
Mapping memory range 0x400000 - 0x1000000
setting 0x0-0x46000 readonly
skipped 0x1000
MM: Initialise page allocator for b4000(b4000)-1000000(1000000)
MM: done
Demand map pfns at 1001000-2001001000.
Heap resides at 2001002000-4001002000.
Initialising timer interface
Initialising console ... done.
gnttab_table mapped at 0x1001000.
Initialising scheduler
Thread "Idle": pointer: 0x2001002050, stack: 0xd0000
Thread "xenstore": pointer: 0x2001002800, stack: 0xe0000
xenbus initialised on irq 1 mfn 0x224faa
Thread "shutdown": pointer: 0x2001002fb0, stack: 0xf0000
Dummy main: start_info=0x7a8a0
Thread "main": pointer: 0x2001003760, stack: 0x100000
"main"
Shutting down ()
Shutdown requested: 3
Thread "shutdown" exited.
INFO[VTPM]: Starting vTPM manager domain
INFO[VTPM]: Option: Using tpm_tis driver
******************* BLKFRONT for device/vbd/768 **********


backend at /local/domain/0/backend/qdisk/7/768
Failed to read /local/domain/0/backend/qdisk/7/768/feature-barrier.
32768 sectors of 512 bytes
**************************
blk_open(device/vbd/768) -> 3
============= Init TPM BACK ================
Thread "tpmback-listener": pointer: 0x20010043f0, stack: 0xf0000
============= Init TPM TIS Driver ==============
IOMEM Machine Base Address: FED40000
Enabled Localities: 0
Map 1 (fed40, ...) at 0x1006000 failed: -1.

This is apparently the error, although I would expect the iomem line
to allow this mapping (-1 is -EPERM, assuming the error number is being
passed through correctly). Does anything appear on the hypervisor's
console (xl dmesg) that would correspond with this error?
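For the record, the -1 can be decoded mechanically: Xen hypercalls return
negated errno values, so errno 1 is the code in question. Assuming dom0
has python3 handy, any errno table confirms the mapping:

```shell
# -1 from the failed mapping is -EPERM; errno 1 == EPERM on Linux.
python3 -c 'import errno, os; print(errno.errorcode[1], os.strerror(1))'
# prints: EPERM Operation not permitted
```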

If you can, the output of "xl debug-key q" while the domain is running
would be useful. Since it's crashing on startup, this may be difficult
to produce; changing the existing sleep(2) in stubdom/vtpmmgr/vtpmmgr.c
to a longer delay should suffice. The output will go to xl dmesg, and
the lines of interest would be:

(XEN) General information for domain 5:
(XEN)     refcnt=3 dying=0 pause_count=0
(XEN)     nr_pages=7168 xenheap_pages=5 shared_pages=0 paged_pages=0 dirty_cpus={} max_pages=7424
(XEN)     handle=3097d8b9-8d80-4bde-94b6-978c98c37296 vm_assist=00000000
(XEN) Rangesets belonging to domain 5:
(XEN)     I/O Ports  { }
(XEN)     Interrupts { }
(XEN)     I/O Memory { fed40 }

Note: my config contains "iomem=['fed40,1']" not "iomem=['fed40,5']" so your
output will differ there.
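Concretely, while the (sleeping) domain is still up, something like the
following should capture the rangeset dump. This is a sketch; both
commands need root in dom0:

```shell
# Ask Xen to dump per-domain state into the hypervisor log, then pull
# out the section of interest.
xl debug-key q
xl dmesg | grep -A 8 'General information for domain'
```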

Do_exit called!
base is 0x10fcb8 caller is 0x1f08a
base is 0x10fcd8 caller is 0x28483
base is 0x10fd88 caller is 0x28558
base is 0x10fde8 caller is 0x2706c
base is 0x10fe28 caller is 0x27084
base is 0x10fe38 caller is 0x1bc69
base is 0x10fe78 caller is 0x6f9c
base is 0x10ff38 caller is 0x34e5
base is 0x10ff68 caller is 0x1fbbc
base is 0x10ffe8 caller is 0x33da

Thanks in advance!
Jordi.


For future reference, you can resolve these addresses (0x1f08a, etc.) using

gdb $XEN_BUILD_DIR/stubdom/mini-os-x86_64-vtpmmgr/mini-os

and then running

(gdb) x/i 0x1f08a

for each frame. That's not needed this time since the error location is
already known: HYPERVISOR_mmu_update failed.
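For a whole backtrace, gdb's batch mode saves retyping. A sketch, using
the addresses from the Do_exit output above (adjust XEN_BUILD_DIR for
your tree):

```shell
# Resolve every frame from the backtrace in one pass.
for addr in 0x1f08a 0x28483 0x28558 0x2706c 0x27084 \
            0x1bc69 0x6f9c 0x34e5 0x1fbbc 0x33da; do
    gdb -batch -ex "x/i $addr" \
        "$XEN_BUILD_DIR/stubdom/mini-os-x86_64-vtpmmgr/mini-os"
done
```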

--
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

