
Re: [Xen-devel] [PATCH v4 1/8] ioreq-server: pre-series tidy up



>>> On 02.04.14 at 17:11, <paul.durrant@xxxxxxxxxx> wrote:
> @@ -485,7 +485,7 @@ static int hvm_set_ioreq_page(
>  
>      if ( (iorp->va != NULL) || d->is_dying )
>      {
> -        destroy_ring_for_helper(&iorp->va, iorp->page);
> +        destroy_ring_for_helper(&va, page);

This clearly isn't just tidying: The bug fix should at least be mentioned
in the description, but for the purposes of backporting it should
probably be submitted as a separate patch.
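
For reference, the standalone (and backportable) fix would be just the
hunk quoted above plus a description making clear that the error path
must tear down the mapping just obtained from prepare_ring_for_helper(),
not the one already recorded in iorp. A minimal sketch of the corrected
error path (untested; locking and return value as in the existing
hvm_set_ioreq_page()):

    /* After prepare_ring_for_helper() has returned the new mapping
     * in (page, va): */
    if ( (iorp->va != NULL) || d->is_dying )
    {
        /*
         * Undo the mapping just established; any existing
         * iorp->va / iorp->page must remain intact.
         */
        destroy_ring_for_helper(&va, page);
        spin_unlock(&iorp->lock);
        return -EINVAL;
    }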

> --- a/xen/arch/x86/hvm/io.c
> +++ b/xen/arch/x86/hvm/io.c
> @@ -46,10 +46,9 @@
>  #include <xen/iocap.h>
>  #include <public/hvm/ioreq.h>
>  
> -int hvm_buffered_io_send(ioreq_t *p)
> +int hvm_buffered_io_send(struct domain *d, const ioreq_t *p)
>  {
> -    struct vcpu *v = current;
> -    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
> +    struct hvm_ioreq_page *iorp = &d->arch.hvm_domain.buf_ioreq;

This isn't a purely cosmetic change either, especially without an
ASSERT(current->domain == d). It looks to be correct with one minor
exception: there's a gdprintk() in this function which - if you don't
expect the function to be called for the current domain only - needs
to be altered so that it no longer falsely names the current vCPU as
the subject.
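
I.e. one of the following would do (sketches only; the message text is
merely indicative of whatever the existing gdprintk() prints). If
callers are guaranteed to pass the current domain:

    ASSERT(current->domain == d);

Otherwise, if the function may legitimately be called for a foreign
domain, name the subject domain explicitly instead of relying on
gdprintk()'s current-vCPU prefix, e.g.:

    printk(XENLOG_G_WARNING "d%d: unexpected ioreq size: %u\n",
           d->domain_id, p->size);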

Jan

