Re: [Xen-devel] [PATCH v3 10/15] argo: implement the notify op
On Thu, Jan 10, 2019 at 4:22 AM Roger Pau Monné <royger@xxxxxxxxxxx> wrote:
>
> On Mon, Jan 7, 2019 at 8:44 AM Christopher Clark
> <christopher.w.clark@xxxxxxxxx> wrote:
> >
> > Queries for data about space availability in registered rings and
> > causes notification to be sent when space has become available.
> >
> > The hypercall op populates a supplied data structure with information about
> > ring state, and if insufficent space is currently available in a given ring,
>
> insufficient
ack, fixed
> > the hypervisor will record the domain's expressed interest and notify it
> > when it observes that space has become available.
> >
> > Checks for free space occur when this notify op is invoked, so it may be
> > intentionally invoked with no data structure to populate
> > (ie. a NULL argument) to trigger such a check and consequent notifications.
> >
> > Limit the maximum number of notify requests in a single operation to a
> > simple fixed limit of 256.
> >
> > +static struct argo_ring_info *
> > +ring_find_info(const struct domain *d, const struct argo_ring_id *id);
> > +
> > +static struct argo_ring_info *
> > +ring_find_info_by_match(const struct domain *d, uint32_t port,
> > + domid_t partner_id);
>
> Can you place the static functions such that you don't need prototypes for
> them?
Ack, yes. These have now been pulled to the top of the file and the
prototypes removed.
> > +
> > /*
> > * This hash function is used to distribute rings within the per-domain
> > * hash tables (d->argo->ring_hash and d->argo_send_hash). The hash table
> > @@ -265,6 +275,17 @@ signal_domain(struct domain *d)
> > }
> >
> > static void
> > +signal_domid(domid_t domain_id)
> > +{
> > + struct domain *d = get_domain_by_id(domain_id);
>
> Newline.
ack
>
> > + if ( !d )
> > + return;
> > +
> > + signal_domain(d);
> > + put_domain(d);
> > +}
> > +
> > +static void
> > ring_unmap(struct argo_ring_info *ring_info)
> > {
> > unsigned int i;
> > @@ -473,6 +494,62 @@ get_sanitized_ring(xen_argo_ring_t *ring, struct argo_ring_info *ring_info)
> > return 0;
> > }
> >
> > +static uint32_t
> > +ringbuf_payload_space(struct domain *d, struct argo_ring_info *ring_info)
> > +{
> > + xen_argo_ring_t ring;
> > + uint32_t len;
> > + int32_t ret;
>
> You use a signed type to internally store the return value, but the
> return type from the function itself is unsigned. Is it guaranteed
> that ret < INT32_MAX always?
It is, yes, so I've added an explanatory comment:
/*
* In a sanitized ring, we can rely on:
* (rx_ptr < ring_info->len) &&
* (tx_ptr < ring_info->len) &&
* (ring_info->len <= XEN_ARGO_MAX_RING_SIZE)
*
* and since: XEN_ARGO_MAX_RING_SIZE < INT32_MAX
* therefore right here: ret < INT32_MAX
 * and we are safe to return it as an unsigned value from this function.
* The subtractions below cannot increase its value.
*/
>
> > +
> > + ASSERT(spin_is_locked(&ring_info->lock));
> > +
> > + len = ring_info->len;
> > + if ( !len )
> > + return 0;
> > +
> > + ret = get_sanitized_ring(&ring, ring_info);
> > + if ( ret )
> > + return 0;
> > +
> > + argo_dprintk("sanitized ringbuf_payload_space: tx_ptr=%d rx_ptr=%d\n",
> > + ring.tx_ptr, ring.rx_ptr);
> > +
> > + /*
> > + * rx_ptr == tx_ptr means that the ring has been emptied, so return
> > + * the maximum payload size that can be accepted -- see message size
> > + * checking logic in the entry to ringbuf_insert which ensures that
> > + * there is always one message slot (of size ROUNDUP_MESSAGE(1)) left
> > + available, preventing a ring from being entirely filled. This ensures
> > + * that matching ring indexes always indicate an empty ring and not a
> > + * full one.
> > + * The subtraction here will not underflow due to minimum size constraints
> > + * enforced on ring size elsewhere.
> > + */
> > + if ( ring.rx_ptr == ring.tx_ptr )
> > + return len - sizeof(struct xen_argo_ring_message_header)
> > + - ROUNDUP_MESSAGE(1);
>
> Why not do something like:
>
> ret = ring.rx_ptr - ring.tx_ptr;
> if ( ret <= 0 )
> ret += len;
>
> Instead of this early return?
>
> The only difference when the ring is full is that len should be used
> instead of the ptr difference.
That's much nicer -- thanks. Done.
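For reference, the combined calculation can be sketched standalone like this (the constants and the function name are illustrative stand-ins, not the actual Xen definitions):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Standalone sketch of the suggested space calculation. MSG_HDR_SIZE and
 * SLOT_SIZE are stand-ins for sizeof(struct xen_argo_ring_message_header)
 * and ROUNDUP_MESSAGE(1); the real values come from the Xen headers.
 */
#define MSG_HDR_SIZE 16
#define SLOT_SIZE    16

static int32_t payload_space(uint32_t len, uint32_t rx_ptr, uint32_t tx_ptr)
{
    int32_t ret = (int32_t)rx_ptr - (int32_t)tx_ptr;

    /* Equal pointers mean an empty ring, so a non-positive difference
     * (wrapped, or empty) gains the full ring length. */
    if ( ret <= 0 )
        ret += len;

    /* Reserve the message header plus one message slot so that a full
     * ring can never present rx_ptr == tx_ptr and masquerade as empty. */
    ret -= MSG_HDR_SIZE;
    ret -= SLOT_SIZE;

    return ret < 0 ? 0 : ret;
}
```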
> >
> > static int
> > +fill_ring_data(const struct domain *currd,
> > + XEN_GUEST_HANDLE(xen_argo_ring_data_ent_t) data_ent_hnd)
> > +{
> > + xen_argo_ring_data_ent_t ent;
> > + struct domain *dst_d;
> > + struct argo_ring_info *ring_info;
> > + int ret;
> > +
> > + ASSERT(rw_is_locked(&argo_lock));
> > +
> > + ret = __copy_from_guest(&ent, data_ent_hnd, 1) ? -EFAULT : 0;
> > + if ( ret )
> > + goto out;
>
> if ( __copy_from_guest(&ent, data_ent_hnd, 1) )
> return -EFAULT;
>
> And you can get rid of the out label.
ack, done.
>
> > +
> > + argo_dprintk("fill_ring_data: ent.ring.domain=%u,ent.ring.port=%x\n",
> > + ent.ring.domain_id, ent.ring.port);
> > +
> > + ent.flags = 0;
>
> Please memset ent to 0 or initialize it to { }, or else you are
> leaking hypervisor stack data to the guest in the padding field.
ok - I've added the initializer, thanks.
Was it really leaking stack data though because the struct should have
been fully populated, including the padding field, with the
__copy_from_guest above?
> > +static void
> > +notify_check_pending(struct domain *currd)
> > +{
> > + unsigned int i;
> > + HLIST_HEAD(to_notify);
> > +
> > + ASSERT(rw_is_locked(&argo_lock));
> > +
> > + read_lock(&currd->argo->lock);
> > +
> > + for ( i = 0; i < ARGO_HTABLE_SIZE; i++ )
> > + {
> > + struct hlist_node *node, *next;
> > + struct argo_ring_info *ring_info;
> > +
> > + hlist_for_each_entry_safe(ring_info, node, next,
> > + &currd->argo->ring_hash[i], node)
> > + {
> > + notify_ring(currd, ring_info, &to_notify);
> > + }
>
> No need for the braces.
Fixed - thanks.
Christopher
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel