Re: [Xen-devel] [PATCH v2 1/5] vTPM: event channel bind interdomain with para/hvm virtual machine
Also CC Stefan Berger.
> >> This is my current understanding of the communications paths and
> >> support for vTPMs in Xen:
> >>
> >> Physical TPM (1.2; with new patches, may also be 2.0)
> >>       |
> >> [MMIO pass-through]
> >>       |
> >> vtpmmgr domain
> >>       |
> >> [minios tpmback/front] ----- ((other domains' vTPMs))
> >>       |
> >> vTPM domain (currently always emulates a TPM v1.2)
> >>       |
> >> [minios tpmback]+----[Linux tpmfront]-- PV Linux domain (fully working)
> >>       |          \
> >>       |           +--[Linux tpmfront]-- HVM Linux with optional PV drivers
> >>       |            \
> >> [QEMU XenDevOps]    [minios or Linux tpmfront]
> >>       |                      |
> >> QEMU dom0 process    QEMU stub-domain
> >>       |                      |
> >> [MMIO emulation]     [MMIO emulation]
> >>       |                      |
> >> Any HVM guest        Any HVM guest
> >>
> >
> > Great, good architecture. The following part was not taken into account in
> > my previous design.
> >
> > [minios or Linux tpmfront]
> > |
> > QEMU stub-domain
> > |
> > [MMIO emulation]
> > |
> > Any HVM guest
> >
> > Thanks, Graaf, for sharing your design.
> >>
> >> The series you are sending will enable QEMU to talk to tpmback directly.
> >> This is the best solution when QEMU is running inside domain 0,
> >> because it is not currently a good idea to use Linux's tpmfront
> >> driver to talk to each guest's vTPM domain.
> >>
> >> When QEMU is run inside a stub domain, there are a few more things to
> >> consider:
> >>
> >> * This stub domain will not have domain 0; the vTPM must bind to another
> >>   domain ID.
> >> * It is possible to use the native TPM driver for the stub domain (which
> >>   may either run Linux or mini-os) because there is no conflict with a
> >>   real TPM software stack running inside domain 0.
> >>
> >> Supporting this feature requires more granularity in the TPM backend
> >> changes. The vTPM domain's backend must be able to handle:
> >>
> >> (1) guest domains which talk directly to the vTPM on their own behalf
> >> (2) QEMU processes in domain 0
> >> (3) QEMU domains which talk directly to the vTPM on behalf of a guest
> >>
> >> Cases (1) and (3) are already handled by the existing tpmback if the
> >> proper domain ID is used.
> >>
> >> Your patch set currently breaks case (1) and (3) for HVM guests while
> >> enabling case (2). An alternate solution that does not break these
> >> cases while enabling case (2) is preferable.
> >>
> >> My thoughts on extending the xenstore interface via an example:
> >>
> >> Domain 0: runs QEMU for guest A
> >> Domain 1: vtpmmgr
> >> Domain 2: vTPM for guest A
> >> Domain 3: HVM guest A
> >>
> >> Domain 4: vTPM for guest B
> >> Domain 5: QEMU stubdom for guest B
> >> Domain 6: HVM guest B
> >>
> >> /local/domain/2/backend/vtpm/3/0/*: backend A-PV
> >> /local/domain/3/device/vtpm/0/*: frontend A-PV
> >>
> >> /local/domain/2/backend/vtpm/0/3/*: backend A-QEMU
> >> /local/domain/0/qemu-device/vtpm/3/*: frontend A-QEMU (uses XenDevOps)
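
For illustration only, here is a minimal sketch (not taken from the patches) of how a Domain-0 QEMU process might resolve its vTPM backend under this proposed layout, using libxenstore. The 'backend' key beneath the qemu-device node is an assumption, and error handling is kept to a minimum:

/* Sketch: resolve the vTPM backend for guest A (domid 3) from Domain-0,
 * following the example layout above.  Assumes libxenstore (xenstore.h;
 * older installs use xs.h).  Illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

int main(void)
{
    struct xs_handle *xsh = xs_open(0);
    if (!xsh) {
        perror("xs_open");
        return 1;
    }

    /* Frontend entry kept by the dom0 QEMU process for guest A. */
    const char *fe_path = "/local/domain/0/qemu-device/vtpm/3";
    char path[256];
    unsigned int len;

    /* Assumption: the frontend entry names its backend path in a
     * "backend" key, as conventional PV frontends do. */
    snprintf(path, sizeof(path), "%s/backend", fe_path);
    char *be_path = xs_read(xsh, XBT_NULL, path, &len);
    if (be_path) {
        /* e.g. "/local/domain/2/backend/vtpm/0/3" in the example above */
        printf("vTPM backend for guest A: %s\n", be_path);
        free(be_path);
    }

    xs_close(xsh);
    return 0;
}
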
> >
> > I think '/local/domain/0/frontend/vtpm/3/0' is much better. It is similar to
> > how backends run by QEMU in Domain-0 are always stored, e.g. under
> > '/local/domain/0/backend/qdisk/1' etc. I will also modify the QEMU code to
> > use '/local/domain/0/frontend/DEVICE' as the general layout for QEMU
> > frontends running in Domain-0.
> >
> > For this example,
> > Domain 0: runs QEMU for guest A
> > Domain 1: vtpmmgr
> > Domain 2: vTPM for guest A
> > Domain 3: HVM guest A
> >
> > I will design XenStore as follows:
> >
> > ## XenStore >> ##
> > local = ""
> >  domain = ""
> >   0 = ""
> >    frontend = ""
> >     vtpm = ""
> >      3 = ""
> >       0 = ""
> >        backend = "/local/domain/2/backend/vtpm/3/0"
> >        backend-id = "2"
> >        state = "*"
> >        handle = "0"
> >        ring-ref = "*"
> >        event-channel = "*"
> >        feature-protocol-v2 = "1"
> >    backend = ""
> >     qdisk = ""
> >      [...]
> >     console = ""
> >     vif = ""
> >      [...]
> >   2 = ""
> >    [...]
> >    backend = ""
> >     vtpm = ""
> >      3 = ""
> >       0 = ""
> >        frontend = "/local/domain/0/frontend/vtpm/3/0"
> >        frontend-id = "0" ('0': the frontend is running in Domain-0)
> >        [...]
> >   3 = ""
> >    [...]
> >    device = "" (frontend devices; the backends run in QEMU, etc.)
> >     vkbd = ""
> >      [...]
> >     vif = ""
> >      [...]
> > ## XenStore << ##
> >
> > Then the source code can read XenStore to get the frontend-id or the
> > frontend path directly.
> > If you agree with it, I will modify the source code to align with the above
> > XenStore design.
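
As a rough sketch of what "read XenStore to get the frontend-id" could look like with libxenstore, against the backend entry from the example layout above; the helper name is hypothetical and error handling is minimal:

/* Sketch: read which domain runs the frontend for one vTPM backend entry.
 * Assumes libxenstore (xenstore.h).  Illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

/* Hypothetical helper: return the frontend domid for a backend entry,
 * or -1 if the key is missing. */
static int vtpm_frontend_domid(struct xs_handle *xsh, const char *be_path)
{
    char key[256];
    unsigned int len;

    snprintf(key, sizeof(key), "%s/frontend-id", be_path);
    char *val = xs_read(xsh, XBT_NULL, key, &len);
    if (!val)
        return -1;

    int domid = atoi(val);   /* "0" means the frontend runs in Domain-0 */
    free(val);
    return domid;
}

int main(void)
{
    struct xs_handle *xsh = xs_open(0);
    if (!xsh)
        return 1;

    /* Backend entry for guest A from the example layout. */
    int domid = vtpm_frontend_domid(xsh, "/local/domain/2/backend/vtpm/3/0");
    printf("frontend-id = %d\n", domid);

    xs_close(xsh);
    return 0;
}
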
>
> I like the /local/domain/0/frontend/* path better than my initial qemu
> suggestion, but I think the domain ID used should be the domain ID of the vTPM
> domain, similar to how backends for the qemu stubdom are done. In this
> example, the paths would be "/local/domain/0/frontend/vtpm/2/0" and
> "/local/domain/2/backend/vtpm/0/0".
> This avoids introducing a dependency on the domain ID of the guest in a
> connection that does not directly involve that domain. If a guest ever needs
> two vTPMs or multiple guests share a vTPM, this method of constructing the
> paths will avoid unneeded conflicts (though I don't expect either of these
> situations to be normal).
I agree with you. I will send out v3 based on it.
For this example,
Domain 0: runs QEMU for guest A
Domain 1: vtpmmgr
Domain 2: vTPM for guest A
Domain 3: HVM guest A
I will design XenStore as follows:
## XenStore >> ##
local = ""
 domain = ""
  0 = ""
   frontend = ""
    vtpm = ""
     2 = ""
      0 = ""
       backend = "/local/domain/2/backend/vtpm/0/0"
       backend-id = "2"
       state = "*"
       handle = "0"
       domain = "Domain3's name"
       ring-ref = "*"
       event-channel = "*"
       feature-protocol-v2 = "1"
   backend = ""
    qdisk = ""
     [...]
    console = ""
    vif = ""
     [...]
  2 = ""
   [...]
   backend = ""
    vtpm = ""
     0 = ""
      0 = ""
       frontend = "/local/domain/0/frontend/vtpm/2/0"
       frontend-id = "0" ('0': the frontend is running in Domain-0)
       [...]
  3 = ""
   [...]
   device = "" (frontend devices; the backends run in QEMU, etc.)
    vkbd = ""
     [...]
    vif = ""
     [...]
## XenStore << ##
Add [domain = "Domain3's name"] under /local/domain/0/frontend/vtpm/2/0.
Then 'xs_directory()' can be used to find out which domain is the backend for a
given guest, although this may be inefficient.
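
A minimal sketch of that xs_directory() lookup, assuming the extra [domain = "..."] key is present under each /local/domain/0/frontend/vtpm/<vtpm-domid>/0 entry as proposed above; the find_vtpm_backend() helper and its names are illustrative, not from the actual patches:

/* Sketch of the xs_directory() scan described above: walk
 * /local/domain/0/frontend/vtpm/<vtpm-domid>/0 and match the "domain" key
 * against a guest's name.  Assumes libxenstore (xenstore.h). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xenstore.h>

/* Hypothetical helper: return the vTPM (backend) domid serving 'guest_name',
 * or -1 if no matching frontend entry is found. */
static int find_vtpm_backend(struct xs_handle *xsh, const char *guest_name)
{
    unsigned int num, len, i;
    char **entries = xs_directory(xsh, XBT_NULL,
                                  "/local/domain/0/frontend/vtpm", &num);
    int result = -1;

    if (!entries)
        return -1;

    for (i = 0; i < num && result < 0; i++) {
        char path[256];

        /* Each entry is named after the vTPM domain, e.g. ".../vtpm/2/0". */
        snprintf(path, sizeof(path),
                 "/local/domain/0/frontend/vtpm/%s/0/domain", entries[i]);
        char *name = xs_read(xsh, XBT_NULL, path, &len);
        if (name && strcmp(name, guest_name) == 0)
            result = atoi(entries[i]);   /* the vTPM (backend) domid */
        free(name);
    }

    free(entries);
    return result;
}

int main(void)
{
    struct xs_handle *xsh = xs_open(0);
    if (!xsh)
        return 1;

    printf("vTPM domain for guest A: %d\n",
           find_vtpm_backend(xsh, "Domain3's name"));
    xs_close(xsh);
    return 0;
}
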
-Quan
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel