
Re: [Xen-users] some problems to start vTPM vtpm-stubdom



> From: Ian Campbell [mailto:Ian.Campbell@xxxxxxxxxx]
> Sent: Tuesday, June 25, 2013 5:16 PM
> To: Xu, Quan
> Cc: xen-users@xxxxxxxxxxxxx; Daniel De Graaf
> Subject: Re: [Xen-users] some problems to start vTPM vtpm-stubdom
> 
> On Thu, 2013-06-20 at 03:18 +0000, Xu, Quan wrote:
> > Hi community,
> >    I am hitting some problems starting the vTPM vtpm-stubdom, following
> > docs/misc/vtpm.txt.
> 
> You might have better luck getting help with your problems if you CC the vTPM
> maintainer as listed in the MAINTAINERS file in the source tree. I have added
> Daniel here now.

Campbell,
    Thanks in advance. :)
    More resources will be devoted to this; my team will work to enable the following three topics:
1. Enable Xen vTPM so that programs can interact with a TPM in a virtual
machine the same way they interact with a TPM on the physical system.
2. Integrate Xen vTPM into the OpenStack cloud, so that virtual machines in
OpenStack can work with Xen vTPM.
3. Promote TPM 2.0 in Xen, so that Xen vTPM can run on TPM 2.0.
 
> 
> >  When I start vtpm-stubdom, vtpmmgr-stubdom prints out:
> > ===
> > ERROR[VTPM]: LoadKey failure: Unrecognized uuid!
> > 69743ae0-9d4a-4ad6-9819-e602085b6792
> > ERROR[VTPM]: Failed to load key
> > ERROR in vtpmmgr_LoadHashKey at vtpm_cmd_handler.c:78 code: TPM_BAD_PARAMETER.
> > ===
> >
> >  I start vtpmmgr-stubdom with vtpmmgr.cfg as below:
> > ====
> > kernel="/usr/lib/xen/boot/vtpmmgr-stubdom.gz"
> > memory=16
> > disk=["file:/var/vtpmmgr-stubdom.img,hda,w"]
> > name="vtpmmgr"
> > iomem=["fed40,1"]
> > ====
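A side note on the disk= line in the config above: the manager expects a raw backing image at that path. A minimal sketch of creating one, assuming an arbitrary 8 MiB size and using /tmp so it runs without root (the real path and the authoritative sizing instructions are in docs/misc/vtpm.txt):

```shell
# Create an empty raw image for the manager's persistent storage.
# The config above points at /var/vtpmmgr-stubdom.img; /tmp is used
# here only so this sketch is runnable without root. 8 MiB is an
# arbitrary illustrative size -- see docs/misc/vtpm.txt for sizing.
dd if=/dev/zero of=/tmp/vtpmmgr-stubdom.img bs=1M count=8
```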
> > It prints out the following:
> > =======
> > Parsing config from vtpmmgr.cfg
> > Daemon running with PID 2406
> > Xen Minimal OS!
> >   start_info: 0xa2000(VA)
> >     nr_pages: 0x1000
> >   shared_inf: 0xcd7b0000(MA)
> >      pt_base: 0xa5000(VA)
> > nr_pt_frames: 0x5
> >     mfn_list: 0x9a000(VA)
> >    mod_start: 0x0(VA)
> >      mod_len: 0
> >        flags: 0x0
> >     cmd_line:
> >   stack:      0x597e0-0x797e0
> > MM: Init
> >       _text: 0x0(VA)
> >      _etext: 0x39357(VA)
> >    _erodata: 0x45000(VA)
> >      _edata: 0x47c40(VA)
> > stack start: 0x597e0(VA)
> >        _end: 0x99e00(VA)
> >   start_pfn: ad
> >     max_pfn: 1000
> > Mapping memory range 0x400000 - 0x1000000 setting 0x0-0x45000 readonly
> > skipped 0x1000
> > MM: Initialise page allocator for b3000(b3000)-1000000(1000000)
> > MM: done
> > Demand map pfns at 1001000-2001001000.
> > Heap resides at 2001002000-4001002000.
> > Initialising timer interface
> > Initialising console ... done.
> > gnttab_table mapped at 0x1001000.
> > Initialising scheduler
> > Thread "Idle": pointer: 0x2001002050, stack: 0xd0000
> > Thread "xenstore": pointer: 0x2001002800, stack: 0xe0000
> > xenbus initialised on irq 1 mfn 0x1f1c9d
> > Thread "shutdown": pointer: 0x2001002fb0, stack: 0xf0000
> > Dummy main: start_info=0x798e0
> > Thread "main": pointer: 0x2001003760, stack: 0x100000
> > "main"
> > Shutting down ()
> > Shutdown requested: 3
> > Thread "shutdown" exited.
> > INFO[VTPM]: Starting vTPM manager domain
> > INFO[VTPM]: Option: Using tpm_tis driver
> > ******************* BLKFRONT for device/vbd/768 **********
> >
> >
> > backend at /local/domain/0/backend/qdisk/1/768
> > Failed to read /local/domain/0/backend/qdisk/1/768/feature-barrier.
> > 32768 sectors of 512 bytes
> > **************************
> > blk_open(device/vbd/768) -> 3
> > ============= Init TPM BACK ================
> > Thread "tpmback-listener": pointer: 0x20010043f0, stack: 0xf0000
> > ============= Init TPM TIS Driver ==============
> > IOMEM Machine Base Address: FED40000
> > Enabled Localities: 0
> > 1.2 TPM (device-id=0xB vendor-id = 15D1 rev-id = 10)
> > TPM interface capabilities (0x800000ff):
> >         Command Ready Int Support
> >         Interrupt Edge Falling
> >         Interrupt Edge Rising
> >         Interrupt Level Low
> >         Interrupt Level High
> >         Locality Change Int Support
> >         Sts Valid Int Support
> >         Data Avail Int Support
> > tpm_tis_open() -> 4
> > INFO[TPM]: TPM_GetCapability
> > INFO[VTPM]: Hardware TPM:
> > INFO[VTPM]:  version: 1 2 3 17
> > INFO[VTPM]:  specLevel: 2
> > INFO[VTPM]:  errataRev: 2
> > INFO[VTPM]:  vendorID: IFX
> > INFO[VTPM]:  vendorSpecificSize: 5
> > INFO[VTPM]:  vendorSpecific: 0311000800
> > INFO[TPM]: TPM_GetCapability
> > INFO[TPM]: TPM_GetCapability
> > INFO[TPM]: TPM_GetCapability
> > INFO[TPM]: TPM_GetCapability
> > INFO[TPM]: TPM_GetCapability
> > INFO[TPM]: TPM_GetCapability
> > INFO[TPM]: TPM_GetRandom
> > INFO[TPM]: TPM_GetRandom
> > INFO[TPM]: TPM_OIAP
> > INFO[TPM]: Auth Session: 0x995ab1 opened by TPM_OIAP.
> > INFO[VTPM]: Loading disk image header
> > INFO[VTPM]: Unpacking storage key
> > INFO[TPM]: TPM_LoadKey
> > INFO[TPM]: Key Handle: 0x5ec1f7e opened by TPM_LoadKey
> > INFO[VTPM]: Unbinding uuid table symmetric key
> > INFO[TPM]: TPM_UnBind
> > INFO[VTPM]: Waiting for commands from vTPM's:
> > ======
> >
> > I start vtpm-stubdom with the following config:
> > kernel="/usr/lib/xen/boot/vtpm-stubdom.gz"
> > memory=8
> > disk=["file:/root/img/vtpm.img,hda,w"]
> > name="domu-vtpm"
> > vtpm=["backend=vtpmmgr,uuid=69743ae0-9d4a-4ad6-9819-e602085b6792"]
> >
> > and it prints out:
> > ======
> > Parsing config from vtpm.cfg
> > Daemon running with PID 2618
> > Xen Minimal OS!
> >   start_info: 0xf0000(VA)
> >     nr_pages: 0x800
> >   shared_inf: 0xdc0e4000(MA)
> >      pt_base: 0xf3000(VA)
> > nr_pt_frames: 0x5
> >     mfn_list: 0xec000(VA)
> >    mod_start: 0x0(VA)
> >      mod_len: 0
> >        flags: 0x0
> >     cmd_line:
> >   stack:      0xab1e0-0xcb1e0
> > MM: Init
> >       _text: 0x0(VA)
> >      _etext: 0x7e647(VA)
> >    _erodata: 0x93000(VA)
> >      _edata: 0x95a80(VA)
> > stack start: 0xab1e0(VA)
> >        _end: 0xeb800(VA)
> >   start_pfn: fb
> >     max_pfn: 800
> > Mapping memory range 0x400000 - 0x800000 setting 0x0-0x93000 readonly
> > skipped 0x1000
> > MM: Initialise page allocator for fd000(fd000)-800000(800000)
> > MM: done
> > Demand map pfns at 801000-2000801000.
> > Heap resides at 2000802000-4000802000.
> > Initialising timer interface
> > Initialising console ... done.
> > gnttab_table mapped at 0x801000.
> > Initialising scheduler
> > Thread "Idle": pointer: 0x2000802050, stack: 0x110000
> > Thread "xenstore": pointer: 0x2000802800, stack: 0x120000
> > xenbus initialised on irq 1 mfn 0x185e7d
> > Thread "shutdown": pointer: 0x2000802fb0, stack: 0x130000
> > Dummy main: start_info=0xcb2e0
> > Thread "main": pointer: 0x2000803760, stack: 0x140000
> > "main"
> > Shutting down ()
> > Shutdown requested: 3
> > Thread "shutdown" exited.
> > vtpm.c:425: Info: starting TPM Emulator (1.2.0.7-475)
> > vtpm.c:357: Info: Startup mode is `clear'
> > vtpm.c:387: Info: All PCRs initialized to default values
> > vtpm.c:391: Info: TPM Maintenance Commands disabled
> > vtpm.c:401: Info: Log level set to (null)
> > ============= Init TPM BACK ================
> > Thread "tpmback-listener": pointer: 0x2000802fb0, stack: 0x130000
> > ============= Init TPM Front ================
> > Tpmfront:Info Waiting for backend connection..
> > Tpmfront:Info Backend Connected
> > Tpmfront:Info Initialization Completed successfully
> > vtpmblk.c:34: Info: Initializing persistent NVM storage
> >
> > ******************* BLKFRONT for device/vbd/768 **********
> >
> >
> > backend at /local/domain/0/backend/qdisk/2/768
> > Failed to read /local/domain/0/backend/qdisk/2/768/feature-barrier.
> > 16384 sectors of 512 bytes
> > **************************
> > blk_open(device/vbd/768) -> 3
> > vtpm.c:175: Info: VTPM Initializing
> >
> > tpm_cmd_handler.c:4113: Debug: tpm_emulator_init(1, 0x00000007)
> > vtpm_cmd.c:155: Info: Requesting Encryption key from backend
> > vtpm_cmd.c:164: Error: VTPM_LoadHashKey() failed with error code (3)
> > vtpm_cmd.c:175: Error: VTPM_LoadHashKey failed
> > tpm_data.c:120: Info: initializing TPM data to default values
> > tpm_startup.c:29: Info: TPM_Init()
> > tpm_testing.c:243: Info: TPM_SelfTestFull()
> > tpm_testing.c:39: Debug: tpm_test_prng()
> > tpm_testing.c:69: Debug: Monobit: 9922
> > tpm_testing.c:70: Debug: Poker:   17.6
> > tpm_testing.c:71: Debug: run_1:   2471, 2582
> > tpm_testing.c:72: Debug: run_2:   1364, 1259
> > tpm_testing.c:73: Debug: run_3:   616, 588
> > tpm_testing.c:74: Debug: run_4:   298, 331
> > tpm_testing.c:75: Debug: run_5:   139, 155
> > tpm_testing.c:76: Debug: run_6+:  163, 137
> > tpm_testing.c:77: Debug: run_34:  0
> > tpm_testing.c:111: Debug: tpm_test_sha1()
> > tpm_testing.c:157: Debug: tpm_test_hmac()
> > tpm_testing.c:184: Debug: tpm_test_rsa_EK()
> > tpm_testing.c:186: Debug: tpm_rsa_generate_key()
> > tpm_testing.c:191: Debug: testing endorsement key
> > tpm_testing.c:197: Debug: tpm_rsa_sign(RSA_SSA_PKCS1_SHA1)
> > tpm_testing.c:200: Debug: tpm_rsa_verify(RSA_SSA_PKCS1_SHA1)
> > tpm_testing.c:203: Debug: tpm_rsa_sign(RSA_SSA_PKCS1_DER)
> > tpm_testing.c:206: Debug: tpm_rsa_verify(RSA_SSA_PKCS1_DER)
> > tpm_testing.c:210: Debug: tpm_rsa_encrypt(RSA_ES_PKCSV15)
> > tpm_testing.c:214: Debug: tpm_rsa_decrypt(RSA_ES_PKCSV15)
> > tpm_testing.c:218: Debug: verify plain text
> > tpm_testing.c:221: Debug: tpm_rsa_encrypt(RSA_ES_OAEP_SHA1)
> > tpm_testing.c:225: Debug: tpm_rsa_decrypt(RSA_ES_OAEP_SHA1)
> > tpm_testing.c:229: Debug: verify plain text
> > tpm_testing.c:261: Info: Self-Test succeeded
> > tpm_startup.c:43: Info: TPM_Startup(1) ##################
> >
> >
> > XSM is in fact enabled; 'xl dmesg' shows the following:
> >
> > (XEN) XSM Framework v1.0.0 initialized
> > (XEN) Policy len  0x25bf, start at ffff83021dffd000.
> > (XEN) Flask:  Initializing.
> > (XEN) AVC INITIALIZED
> > (XEN) Flask: 128 avtab hash slots, 276 rules.
> > (XEN) Flask: 128 avtab hash slots, 276 rules.
> > (XEN) Flask:  3 users, 3 roles, 39 types, 1 bools
> > (XEN) Flask:  11 classes, 276 rules
> > (XEN) Flask:  Starting in permissive mode.
> >
> > Could you help me fix it? Thanks in advance.
> >
> >
> >
> > Quan,Xu
> > Intel
> >
> >
> >
> > _______________________________________________
> > Xen-users mailing list
> > Xen-users@xxxxxxxxxxxxx
> > http://lists.xen.org/xen-users
> 



 

