Re: [Xen-devel] [Xen-users] some problems to start vTPM vtpm-stubdom
> -----Original Message-----
> From: Daniel De Graaf [mailto:dgdegra@xxxxxxxxxxxxx]
> Sent: Tuesday, July 02, 2013 5:39 AM
> To: Xu, Quan
> Cc: Ian Campbell; xen-users@xxxxxxxxxxxxx
> Subject: Re: [Xen-users] some problems to start vTPM vtpm-stubdom
>
> On 06/25/2013 07:52 AM, Xu, Quan wrote:
> >> From: Ian Campbell [mailto:Ian.Campbell@xxxxxxxxxx]
> >> Sent: Tuesday, June 25, 2013 5:16 PM
> >> To: Xu, Quan
> >> Cc: xen-users@xxxxxxxxxxxxx; Daniel De Graaf
> >> Subject: Re: [Xen-users] some problems to start vTPM vtpm-stubdom
> >>
> >> On Thu, 2013-06-20 at 03:18 +0000, Xu, Quan wrote:
> >>> Hi community,
> >>> there are some problems starting the vTPM vtpm-stubdom following
> >>> docs/misc/vtpm.txt.
> >>
> >> You might have better luck getting help with your problems if you CC
> >> the vTPM maintainer as listed in the MAINTAINERS file in the source
> >> tree. I have added Daniel here now.
> >
> > Campbell,
> > Thanks in advance. :)
> > More resources will be focused on it; my team will try to enable the
> > three topics below:
> > 1. Enable Xen vTPM to allow programs to interact with a TPM in a
> > virtual machine, the same way they interact with a TPM on the
> > physical system.
>
> This should be working for Linux domains (PV & HVM) with the PV driver
> for the vTPM.
>
> > 2. Integrate Xen vTPM into an OpenStack cloud, so a virtual machine in
> > OpenStack can work with a Xen vTPM.
> > 3. Promote TPM 2.0 in Xen, so Xen vTPM can run on TPM 2.0.
>
> Just curious: do you mean using a hardware TPM 2.0, emulating a TPM 2.0,
> or both?

I plan to use both a hardware TPM 2.0 and an emulated TPM 2.0. I know some
colleagues are developing a TPM 2.0 driver.

> >>> When I start vtpm-stubdom, the vtpmmgr-stubdom prints out:
> >>> ===
> >>> ERROR[VTPM]: LoadKey failure: Unrecognized uuid!
> >>> 69743ae0-9d4a-4ad6-9819-e602085b6792
>
> This is just a message with a bad priority, assuming it's the first time
> you have started this particular vTPM. Once the vTPM has run SaveHashKey,
> this should not appear again for that UUID.
>
> Eventually the TPM Manager will have a management interface used to
> create vTPMs, which can be used to provide evidence that a given vTPM's
> secrets were created and are only available in a given list of
> configurations.
>
> >>> ERROR[VTPM]: Failed to load key
> >>> ERROR in vtpmmgr_LoadHashKey at vtpm_cmd_handler.c:78 code:
> >>> TPM_BAD_PARAMETER.
> >>> ===
> >>>
> [...]
> >>> tpm_cmd_handler.c:4113: Debug: tpm_emulator_init(1, 0x00000007)
> >>> vtpm_cmd.c:155: Info: Requesting Encryption key from backend
> >>> vtpm_cmd.c:164: Error: VTPM_LoadHashKey() failed with error code (3)
> >>> vtpm_cmd.c:175: Error: VTPM_LoadHashKey failed
>
> Same error source here; the vTPM will generate new keys and save data
> once any command has been processed.

First, I will clear TPM ownership via the BIOS.

If /var/vtpmmgr-stubdom.img is created with the command below,

    dd if=/dev/zero of=/var/vtpmmgr-stubdom.img bs=16M count=1

vtpmmgr always prints:

~~~
ERROR[VTPM]: Invalid ID string in disk image!
ERROR[VTPM]: Failed to load manager data!
~~~

So I use a tty linux image instead. I also change the UUID of the tty linux
image with the command below:

    uuidgen | xargs -i tune2fs /var/vtpmmgr-stubdom.img -U {}

Now there is another error (ERROR[TPM]: Failed with return code TPM_NOSRK),
and I will keep working on it. I think I should create the SRK manually with
some tool; do you have any suggestions?
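To make this easy to reproduce, here are the disk-image preparation steps
from above collected into one snippet. The /var/vtpmmgr-stubdom.img path and
the 16M size are simply the values I am using here, not anything required by
docs/misc/vtpm.txt:

~~~
# Variant 1: zero-filled image -- with this, vtpmmgr reports
# "Invalid ID string in disk image!" on my system.
dd if=/dev/zero of=/var/vtpmmgr-stubdom.img bs=16M count=1

# Variant 2: reuse an existing tty linux (ext2) image as the manager disk
# and give it a freshly generated filesystem UUID.
uuidgen | xargs -i tune2fs /var/vtpmmgr-stubdom.img -U {}
~~~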
The output when I start vtpmmgr with the tty linux image (UUID changed) is
below:

Parsing config from vtpmmgr.cfg
Daemon running with PID 1813
Xen Minimal OS!
start_info: 0xa3000(VA)
nr_pages: 0x1000
shared_inf: 0xdf3d3000(MA)
pt_base: 0xa6000(VA)
nr_pt_frames: 0x5
mfn_list: 0x9b000(VA)
mod_start: 0x0(VA)
mod_len: 0
flags: 0x0
cmd_line:
stack: 0x5a7a0-0x7a7a0
MM: Init
_text: 0x0(VA)
_etext: 0x39854(VA)
_erodata: 0x46000(VA)
_edata: 0x48c00(VA)
stack start: 0x5a7a0(VA)
_end: 0x9adc0(VA)
start_pfn: ae
max_pfn: 1000
Mapping memory range 0x400000 - 0x1000000
setting 0x0-0x46000 readonly
skipped 0x1000
MM: Initialise page allocator for b4000(b4000)-1000000(1000000)
MM: done
Demand map pfns at 1001000-2001001000.
Heap resides at 2001002000-4001002000.
Initialising timer interface
Initialising console ... done.
gnttab_table mapped at 0x1001000.
Initialising scheduler
Thread "Idle": pointer: 0x2001002050, stack: 0xd0000
Thread "xenstore": pointer: 0x2001002800, stack: 0xe0000
xenbus initialised on irq 1 mfn 0x207380
Thread "shutdown": pointer: 0x2001002fb0, stack: 0xf0000
Dummy main: start_info=0x7a8a0
Thread "main": pointer: 0x2001003760, stack: 0x100000
"main"
Shutting down ()
Shutdown requested: 3
Thread "shutdown" exited.
INFO[VTPM]: Starting vTPM manager domain
INFO[VTPM]: Option: Using tpm_tis driver
******************* BLKFRONT for device/vbd/768 **********
backend at /local/domain/0/backend/qdisk/1/768
Failed to read /local/domain/0/backend/qdisk/1/768/feature-barrier.
49152 sectors of 512 bytes
**************************
blk_open(device/vbd/768) -> 3
============= Init TPM BACK ================
Thread "tpmback-listener": pointer: 0x20010043f0, stack: 0xf0000
============= Init TPM TIS Driver ==============
IOMEM Machine Base Address: FED40000
Enabled Localities: 0
1.2 TPM (device-id=0xB vendor-id = 15D1 rev-id = 10)
TPM interface capabilities (0x800000ff):
  Command Ready Int Support
  Interrupt Edge Falling
  Interrupt Edge Rising
  Interrupt Level Low
  Interrupt Level High
  Locality Change Int Support
  Sts Valid Int Support
  Data Avail Int Support
tpm_tis_open() -> 4
INFO[TPM]: TPM_GetCapability
INFO[VTPM]: Hardware TPM:
INFO[VTPM]:  version: 1 2 3 17
INFO[VTPM]:  specLevel: 2
INFO[VTPM]:  errataRev: 2
INFO[VTPM]:  vendorID: IFX
INFO[VTPM]:  vendorSpecificSize: 5
INFO[VTPM]:  vendorSpecific: 0311000800
INFO[TPM]: TPM_GetCapability
INFO[TPM]: TPM_GetCapability
INFO[TPM]: TPM_GetCapability
INFO[TPM]: TPM_GetCapability
INFO[TPM]: TPM_GetCapability
INFO[TPM]: TPM_GetCapability
INFO[TPM]: TPM_GetRandom
INFO[TPM]: TPM_GetRandom
INFO[TPM]: TPM_OIAP
INFO[TPM]: Auth Session: 0xea2f96 opened by TPM_OIAP.
INFO[VTPM]: Loading disk image header
INFO[VTPM]: Unpacking storage key
INFO[TPM]: TPM_LoadKey
ERROR[TPM]: Failed with return code TPM_NOSRK
INFO[TPM]: Auth Session: 0xea2f96 closed by TPM
ERROR in vtpm_storage_load_header at vtpm_storage.c:631 code: TPM_NOSRK.
ERROR[VTPM]: Failed to load manager data!
INFO[TPM]: TPM_OIAP
INFO[TPM]: Auth Session: 0xecf539 opened by TPM_OIAP.
INFO[VTPM]: Failed to read manager file. Assuming first time initialization.
INFO[TPM]: TPM_ReadPubek
INFO[TPM]: TPM_TakeOwnership
INFO[TPM]: TPM_DisablePubekRead
INFO[TPM]: TPM_OSAP
INFO[TPM]: Auth Session: 0x1dcef23 opened by TPM_OSAP.
INFO[TPM]: TPM_CreateWrapKey
INFO[TPM]: Auth Session: 0x1dcef23 closed by TPM
INFO[TPM]: TPM_LoadKey
INFO[TPM]: Key Handle: 0x5bb011a opened by TPM_LoadKey
INFO[TPM]: TPM_SaveState
INFO[VTPM]: Creating new disk image header
INFO[VTPM]: Saving root storage key..
INFO[VTPM]: Binding uuid table symmetric key..
INFO[TPM]: TPM_Bind
INFO[VTPM]: Saved new manager disk header.
INFO[VTPM]: Finished initialized new VTPM manager
INFO[VTPM]: Waiting for commands from vTPM's:
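On the SRK question above: as the log shows, after ownership is cleared in
the BIOS the manager takes ownership itself (TPM_TakeOwnership) and creates
a new SRK, so manual SRK creation may not be necessary at all. If someone
does want to inspect or establish ownership from dom0 first, a minimal
sketch is below. The tpm-tools/TrouSerS commands are my assumption and are
not mentioned in docs/misc/vtpm.txt, and dom0's TPM driver and tcsd would
presumably have to be stopped again before starting vtpmmgr, since the
manager drives the TIS registers at 0xfed40000 directly:

~~~
# Sanity-check that dom0 can talk to the hardware TPM
# (needs the kernel tpm_tis driver plus TrouSerS' tcsd running).
tpm_version

# Taking ownership is what actually creates the SRK; -z sets the SRK
# secret to the well-known all-zero value that most software expects.
tpm_takeownership -z
~~~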
> >>> tpm_data.c:120: Info: initializing TPM data to default values
> >>> tpm_startup.c:29: Info: TPM_Init()
> >>> tpm_testing.c:243: Info: TPM_SelfTestFull()
> >>> tpm_testing.c:39: Debug: tpm_test_prng()
> >>> tpm_testing.c:69: Debug: Monobit: 9922
> >>> tpm_testing.c:70: Debug: Poker: 17.6
> >>> tpm_testing.c:71: Debug: run_1: 2471, 2582
> >>> tpm_testing.c:72: Debug: run_2: 1364, 1259
> >>> tpm_testing.c:73: Debug: run_3: 616, 588
> >>> tpm_testing.c:74: Debug: run_4: 298, 331
> >>> tpm_testing.c:75: Debug: run_5: 139, 155
> >>> tpm_testing.c:76: Debug: run_6+: 163, 137
> >>> tpm_testing.c:77: Debug: run_34: 0
> >>> tpm_testing.c:111: Debug: tpm_test_sha1()
> >>> tpm_testing.c:157: Debug: tpm_test_hmac()
> >>> tpm_testing.c:184: Debug: tpm_test_rsa_EK()
> >>> tpm_testing.c:186: Debug: tpm_rsa_generate_key()
> >>> tpm_testing.c:191: Debug: testing endorsement key
> >>> tpm_testing.c:197: Debug: tpm_rsa_sign(RSA_SSA_PKCS1_SHA1)
> >>> tpm_testing.c:200: Debug: tpm_rsa_verify(RSA_SSA_PKCS1_SHA1)
> >>> tpm_testing.c:203: Debug: tpm_rsa_sign(RSA_SSA_PKCS1_DER)
> >>> tpm_testing.c:206: Debug: tpm_rsa_verify(RSA_SSA_PKCS1_DER)
> >>> tpm_testing.c:210: Debug: tpm_rsa_encrypt(RSA_ES_PKCSV15)
> >>> tpm_testing.c:214: Debug: tpm_rsa_decrypt(RSA_ES_PKCSV15)
> >>> tpm_testing.c:218: Debug: verify plain text
> >>> tpm_testing.c:221: Debug: tpm_rsa_encrypt(RSA_ES_OAEP_SHA1)
> >>> tpm_testing.c:225: Debug: tpm_rsa_decrypt(RSA_ES_OAEP_SHA1)
> >>> tpm_testing.c:229: Debug: verify plain text
> >>> tpm_testing.c:261: Info: Self-Test succeeded
> >>> tpm_startup.c:43: Info: TPM_Startup(1) ##################
> >>>
> >>> Actually XSM is enabled; 'xl dmesg' shows the info below:
>
> XSM is not a requirement for using the vTPM domains, although it is
> helpful to provide isolation of the keys contained in the vTPM.

When I try to disable XSM, vtpmmgr always shuts down, so I always enable
XSM; the full output from that run follows the config sketch below.
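For reference, the manager domain config I am using follows the example in
docs/misc/vtpm.txt; the kernel path, memory size, and iomem range below are
the values as I recall them from that document, so they may need adjusting
for other builds:

~~~
# vtpmmgr.cfg -- roughly as in docs/misc/vtpm.txt
kernel="/usr/lib/xen/boot/vtpmmgr-stubdom.gz"
memory=16
disk=["file:/var/vtpmmgr-stubdom.img,hda,w"]
name="vtpmmgr"
# grants the stub domain the TPM TIS MMIO range at 0xfed40000 (5 pages)
iomem=["fed40,5"]
~~~

The "Map 1 (fed40, ...) failed" line in the non-XSM run below is that same
TIS range failing to map, so the iomem grant is the first thing I would
double-check in that configuration.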
Parsing config from vtpmmgr.cfg
Daemon running with PID 1786
Xen Minimal OS!
start_info: 0xa3000(VA)
nr_pages: 0x1000
shared_inf: 0xdf3d6000(MA)
pt_base: 0xa6000(VA)
nr_pt_frames: 0x5
mfn_list: 0x9b000(VA)
mod_start: 0x0(VA)
mod_len: 0
flags: 0x0
cmd_line:
stack: 0x5a7a0-0x7a7a0
MM: Init
_text: 0x0(VA)
_etext: 0x39854(VA)
_erodata: 0x46000(VA)
_edata: 0x48c00(VA)
stack start: 0x5a7a0(VA)
_end: 0x9adc0(VA)
start_pfn: ae
max_pfn: 1000
Mapping memory range 0x400000 - 0x1000000
setting 0x0-0x46000 readonly
skipped 0x1000
MM: Initialise page allocator for b4000(b4000)-1000000(1000000)
MM: done
Demand map pfns at 1001000-2001001000.
Heap resides at 2001002000-4001002000.
Initialising timer interface
Initialising console ... done.
gnttab_table mapped at 0x1001000.
Initialising scheduler
Thread "Idle": pointer: 0x2001002050, stack: 0xd0000
Thread "xenstore": pointer: 0x2001002800, stack: 0xe0000
xenbus initialised on irq 1 mfn 0x207071
Thread "shutdown": pointer: 0x2001002fb0, stack: 0xf0000
Dummy main: start_info=0x7a8a0
Thread "main": pointer: 0x2001003760, stack: 0x100000
"main"
Shutting down ()
Shutdown requested: 3
Thread "shutdown" exited.
INFO[VTPM]: Starting vTPM manager domain
INFO[VTPM]: Option: Using tpm_tis driver
******************* BLKFRONT for device/vbd/768 **********
backend at /local/domain/0/backend/qdisk/1/768
Failed to read /local/domain/0/backend/qdisk/1/768/feature-barrier.
32768 sectors of 512 bytes
**************************
blk_open(device/vbd/768) -> 3
============= Init TPM BACK ================
Thread "tpmback-listener": pointer: 0x20010043f0, stack: 0xf0000
============= Init TPM TIS Driver ==============
IOMEM Machine Base Address: FED40000
Enabled Localities: 0
Map 1 (fed40, ...) at 0x1006000 failed: -1.
Do_exit called!
base is 0x10fcb8 caller is 0x1f0ea
base is 0x10fcd8 caller is 0x284e3
base is 0x10fd88 caller is 0x285b8
base is 0x10fde8 caller is 0x270cc
base is 0x10fe28 caller is 0x270e4
base is 0x10fe38 caller is 0x1bcc9
base is 0x10fe78 caller is 0x6ffc
base is 0x10ff38 caller is 0x3545
base is 0x10ff68 caller is 0x1fc1c
base is 0x10ffe8 caller is 0x343b

> >>>
> >>> (XEN) XSM Framework v1.0.0 initialized
> >>> (XEN) Policy len 0x25bf, start at ffff83021dffd000.
> >>> (XEN) Flask: Initializing.
> >>> (XEN) AVC INITIALIZED
> >>> (XEN) Flask: 128 avtab hash slots, 276 rules.
> >>> (XEN) Flask: 128 avtab hash slots, 276 rules.
> >>> (XEN) Flask: 3 users, 3 roles, 39 types, 1 bools
> >>> (XEN) Flask: 11 classes, 276 rules
> >>> (XEN) Flask: Starting in permissive mode.
> >>>
> >>> Could you help me to fix it. Thanks in advance.
> >>>
> >>> Quan,Xu
> >>> Intel
> >>>
> >>> _______________________________________________
> >>> Xen-users mailing list
> >>> Xen-users@xxxxxxxxxxxxx
> >>> http://lists.xen.org/xen-users
>
> --
> Daniel De Graaf
> National Security Agency

Quan,Xu
Intel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel