
Re: [Xen-devel] [v3 3/5] Qemu-Xen-vTPM: Register Xen stubdom vTPM frontend driver




> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@xxxxxxxxxxxxx]
> Sent: Tuesday, January 20, 2015 1:19 AM
> To: Xu, Quan
> Cc: qemu-devel@xxxxxxxxxx; xen-devel@xxxxxxxxxxxxx;
> stefano.stabellini@xxxxxxxxxxxxx
> Subject: Re: [v3 3/5] Qemu-Xen-vTPM: Register Xen stubdom vTPM frontend
> driver
> 
> On Tue, 30 Dec 2014, Quan Xu wrote:
> > +int vtpm_recv(struct XenDevice *xendev, uint8_t* buf, size_t *count)
> > +{
> > +    struct xen_vtpm_dev *vtpmdev = container_of(xendev, struct
> xen_vtpm_dev,
> > +                                                xendev);
> > +    struct tpmif_shared_page *shr = vtpmdev->shr;
> > +    unsigned int offset;
> > +
> > +    if (shr->state == TPMIF_STATE_IDLE) {
> > +        return -ECANCELED;
> > +    }
> > +
> > +    while (vtpm_status(vtpmdev) != VTPM_STATUS_IDLE) {
> > +        vtpm_aio_wait(vtpm_aio_ctx);
> > +    }
> 
> Is this really necessary to write this as a busy loop?
> I think you should write it as a proper aio callback for efficiency:
> QEMU is going to burn 100% of the cpu polling and not doing anything else!

After further checking and testing, the busy loop in vtpm_recv is indeed
unnecessary; I can remove it in v4.
The design is similar to the Linux PV frontend driver: in theory, the aio
wait at the end of vtpm_send already guarantees the response is complete,
which makes this busy loop redundant.
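
Roughly, I expect the tail of vtpm_send to look like the sketch below. This
is illustrative only, reusing vtpm_status()/vtpm_aio_wait()/vtpm_aio_ctx
from this patch, and is not the final v4 code:

    static int vtpm_send(struct xen_vtpm_dev *vtpmdev, const uint8_t *buf,
                         size_t count)
    {
        struct tpmif_shared_page *shr = vtpmdev->shr;

        /* Copy the command in after the header and the extra-page grant IDs. */
        memcpy((uint8_t *)shr + sizeof(*shr) + 4 * shr->nr_extra_pages,
               buf, count);
        shr->length = count;
        smp_wmb();                    /* payload visible before the state flip */
        shr->state = TPMIF_STATE_SUBMIT;
        xen_be_send_notify(&vtpmdev->xendev);

        /* Drive the event loop until the backend has produced the response;
         * this wait is what makes the loop in vtpm_recv redundant. */
        while (vtpm_status(vtpmdev) != VTPM_STATUS_IDLE) {
            vtpm_aio_wait(vtpm_aio_ctx);
        }

        return 0;
    }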


           -------------
          |             |
          |    vtpm     |
          |   Domain    |
          |             |
           -------------
                 ^
          (Send) ||
            .    ||
            .    ||(Recv)
          (Send) ||
                  v
           -------------
          |             |
          |    QEMU     |
          |    vtpm     |
          |  frontend   |
          |             |
           -------------

The vTPM aio logic keeps Send and Recv strictly ordered: a Recv only runs
after the matching Send has completed.
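
If we later want to go further and follow Stefano's suggestion of a proper
aio callback, the xen_backend framework already delivers event-channel
notifications through the XenDevOps .event hook. A rough sketch, where
vtpm_request_done() is a hypothetical completion helper that is not part
of this patch:

    /* Sketch only: run from the XenDevOps .event callback when the backend
     * fires the event channel, instead of polling vtpm_status() in a loop. */
    static void vtpm_event(struct XenDevice *xendev)
    {
        struct xen_vtpm_dev *vtpmdev = container_of(xendev,
                                                    struct xen_vtpm_dev,
                                                    xendev);

        /* Nothing to do until the backend has finished the request. */
        if (vtpm_status(vtpmdev) != VTPM_STATUS_IDLE) {
            return;
        }

        /* Hypothetical helper: copy the response out of the shared page
         * and complete the pending TPM command. */
        vtpm_request_done(vtpmdev);
    }

That would let QEMU service other work while the backend processes the
command, instead of blocking in the event loop.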


-Quan


> 
> 
> > +    offset = sizeof(*shr) + 4*shr->nr_extra_pages;
> > +    memcpy(buf, offset + (uint8_t *)shr, shr->length);
> > +    *count = shr->length;
> > +
> > +    return 0;
> > +}
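
For completeness: the offset computation in the quoted hunk follows the
vTPM v2 shared-page layout from Xen's public io/tpmif.h, where the payload
starts right after the fixed header plus one 4-byte grant ID per extra
page. Paraphrased from the public header (check the header itself for the
authoritative definition):

    struct tpmif_shared_page {
        uint32_t length;         /* request/response length in bytes */
        uint8_t  state;          /* enum vtpm_shared_page_state */
        uint8_t  locality;       /* for the current request */
        uint8_t  pad;
        uint8_t  nr_extra_pages; /* extra pages for long packets; may be zero */
        uint32_t extra_pages[0]; /* grant IDs; length in nr_extra_pages */
    };
    /* => payload offset = sizeof(*shr) + 4 * shr->nr_extra_pages */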
