Re: [Xen-devel] [PATCH v2 1/6] ioreq-server: centralize access to ioreq structures
>>> On 04.03.14 at 12:40, Paul Durrant <paul.durrant@xxxxxxxxxx> wrote:
> @@ -232,20 +211,15 @@ static int hvmemul_do_io(
> vio->io_state = HVMIO_handle_mmio_awaiting_completion;
> break;
> case X86EMUL_UNHANDLEABLE:
> - /* If there is no backing DM, just ignore accesses */
> - if ( !has_dm )
> + rc = X86EMUL_RETRY;
> + if ( !hvm_send_assist_req(curr, p) )
> {
> rc = X86EMUL_OKAY;
> vio->io_state = HVMIO_none;
> }
> - else
> - {
> - rc = X86EMUL_RETRY;
> - if ( !hvm_send_assist_req(curr) )
> - vio->io_state = HVMIO_none;
> - else if ( p_data == NULL )
> - rc = X86EMUL_OKAY;
> - }
> + else if ( p_data == NULL )
> + rc = X86EMUL_OKAY;
> +
> break;
Is the rc value change here really intentional? Previously, when
!hvm_send_assist_req(), rc would end up being X86EMUL_RETRY,
while now it gets set to X86EMUL_OKAY. After all, there were
three different paths originally (setting rc to X86EMUL_OKAY
and/or setting vio->io_state to HVMIO_none), and you can't
express this with the bool_t return type of hvm_send_assist_req().
> +bool_t hvm_io_pending(struct vcpu *v)
> +{
> + ioreq_t *p;
> +
> + if ( !(p = get_ioreq(v)) )
I'd prefer it if you used the call to get_ioreq() as the initializer
instead of in a parenthesized assignment inside a conditional. But yes,
it's a matter of taste...
> @@ -1407,7 +1425,86 @@ void hvm_vcpu_down(struct vcpu *v)
> }
> }
>
> -bool_t hvm_send_assist_req(struct vcpu *v)
> +int hvm_buffered_io_send(ioreq_t *p)
const ioreq_t *?
> +{
> + struct vcpu *v = current;
"curr" please.
> + struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
> + buffered_iopage_t *pg = iorp->va;
> + buf_ioreq_t bp;
> + /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
> + int qw = 0;
> +
> + /* Ensure buffered_iopage fits in a page */
> + BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
> +
> + /*
> + * Return 0 for the cases we can't deal with:
> + * - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
> + * - we cannot buffer accesses to guest memory buffers, as the guest
> + * may expect the memory buffer to be synchronously accessed
> + * - the count field is usually used with data_is_ptr and since we don't
> + support data_is_ptr we do not waste space for the count field either
> + */
> + if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
> + return 0;
> +
> + bp.type = p->type;
> + bp.dir = p->dir;
> + switch ( p->size )
> + {
> + case 1:
> + bp.size = 0;
> + break;
> + case 2:
> + bp.size = 1;
> + break;
> + case 4:
> + bp.size = 2;
> + break;
> + case 8:
> + bp.size = 3;
> + qw = 1;
> + break;
> + default:
> + gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
> + return 0;
> + }
> +
> + bp.data = p->data;
> + bp.addr = p->addr;
> +
> + spin_lock(&iorp->lock);
> +
> + if ( (pg->write_pointer - pg->read_pointer) >=
> + (IOREQ_BUFFER_SLOT_NUM - qw) )
> + {
> + /* The queue is full: send the iopacket through the normal path. */
> + spin_unlock(&iorp->lock);
> + return 0;
> + }
> +
> + memcpy(&pg->buf_ioreq[pg->write_pointer % IOREQ_BUFFER_SLOT_NUM],
> + &bp, sizeof(bp));
Better to be type-safe and use an assignment here?
> +
Line of only spaces.
> + if ( qw )
> + {
> + bp.data = p->data >> 32;
> + memcpy(&pg->buf_ioreq[(pg->write_pointer+1) % IOREQ_BUFFER_SLOT_NUM],
> + &bp, sizeof(bp));
> + }
> +
> + /* Make the ioreq_t visible /before/ write_pointer. */
> + wmb();
> + pg->write_pointer += qw ? 2 : 1;
> +
> + notify_via_xen_event_channel(v->domain,
> + v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
> + spin_unlock(&iorp->lock);
Perhaps worth caching v->domain into a local variable?
> +bool_t hvm_send_assist_req(struct vcpu *v, ioreq_t *proto_p)
const ioreq_t *
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel