
Re: [Xen-devel] [PATCH] add tpm_xenu.ko: Xen Virtual TPM frontend driver



On Thu, Nov 08, 2012 at 08:17:32AM +0000, Jan Beulich wrote:
> >>> On 07.11.12 at 19:14, Matthew Fioravante <matthew.fioravante@xxxxxxxxxx> 
> >>> wrote:
> > On 11/07/2012 09:46 AM, Kent Yoder wrote:
> >>> --- a/drivers/char/tpm/tpm.h
> >>> +++ b/drivers/char/tpm/tpm.h
> >>> @@ -130,6 +130,9 @@ struct tpm_chip {
> >>>
> >>>           struct list_head list;
> >>>           void (*release) (struct device *);
> >>> +#ifdef CONFIG_XEN
> >>> +	void *priv;
> >>> +#endif
> >>   Can you use the chip->vendor.data pointer here instead? tpm_ibmvtpm is
> >> already using that as a priv pointer. I should probably change that name
> >> to make it more obvious what that's used for.
> > That makes more sense. I'm guessing your data pointer didn't exist 
> > in the 2.6.18 kernel, which is why they added their own priv pointer.
> 
> It got introduced with 3.7-rc.
> 
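  For reference, the pattern tpm_ibmvtpm uses looks roughly like this
(a from-memory sketch, not a verbatim copy of the driver, so the names
may differ slightly):

	/* at init time: stash the driver-private struct on the chip */
	chip->vendor.data = (void *)ibmvtpm;

	/* later: recover it from a struct device */
	static struct ibmvtpm_dev *ibmvtpm_get_data(const struct device *dev)
	{
		struct tpm_chip *chip = dev_get_drvdata(dev);

		if (chip)
			return (struct ibmvtpm_dev *)chip->vendor.data;
		return NULL;
	}

The Xen frontend could stash its own private struct the same way
instead of growing struct tpm_chip.
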
> >>> @@ -310,6 +313,18 @@ struct tpm_cmd_t {
> >>>
> >>>   ssize_t tpm_getcap(struct device *, __be32, cap_t *, const char *);
> >>>
> >>> +#ifdef CONFIG_XEN
> >>> +static inline void *chip_get_private(const struct tpm_chip *chip)
> >>> +{
> >>> +	return chip->priv;
> >>> +}
> >>> +
> >>> +static inline void chip_set_private(struct tpm_chip *chip, void *priv)
> >>> +{
> >>> +	chip->priv = priv;
> >>> +}
> >>> +#endif
> >>   Can you put these in tpm_vtpm.c please?  One less #ifdef. :-)
> > Agreed, I'd rather not have to modify your shared tpm.h interface at all.
> 
> Either such accessors should be defined here, for everyone to
> use (and tpm_ibmvtpm.c get changed accordingly), or the Xen
> code should access the field without wrappers too (for consistency).

  Agreed. I'll update tpm_ibmvtpm.
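
Roughly what I have in mind for tpm.h, so both drivers share one
accessor pair (an untested sketch; the field and helper names are
still open):

	/* drivers/char/tpm/tpm.h */
	static inline void *chip_get_private(const struct tpm_chip *chip)
	{
		return chip->vendor.data;
	}

	static inline void chip_set_private(struct tpm_chip *chip, void *priv)
	{
		chip->vendor.data = priv;
	}

That drops the #ifdef CONFIG_XEN entirely, and tpm_ibmvtpm's
open-coded chip->vendor.data accesses can move to the same helpers.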

Kent

> 
> Jan
> 

