Re: [Xen-devel] [PATCH v4 12/18] xen/pvcalls: implement poll command
On Tue, 20 Jun 2017, Boris Ostrovsky wrote:
> > @@ -499,6 +521,55 @@ static int pvcalls_back_accept(struct xenbus_device *dev,
> >  static int pvcalls_back_poll(struct xenbus_device *dev,
> >  			     struct xen_pvcalls_request *req)
> >  {
> > +	struct pvcalls_fedata *fedata;
> > +	struct sockpass_mapping *mappass;
> > +	struct xen_pvcalls_response *rsp;
> > +	struct inet_connection_sock *icsk;
> > +	struct request_sock_queue *queue;
> > +	unsigned long flags;
> > +	int ret;
> > +	bool data;
> > +
> > +	fedata = dev_get_drvdata(&dev->dev);
> > +
> > +	mappass = radix_tree_lookup(&fedata->socketpass_mappings,
> > +				    req->u.poll.id);
> > +	if (mappass == NULL)
> > +		return -EINVAL;
> > +
> > +	/*
> > +	 * Limitation of the current implementation: only support one
> > +	 * concurrent accept or poll call on one socket.
> > +	 */
> > +	spin_lock_irqsave(&mappass->copy_lock, flags);
> > +	if (mappass->reqcopy.cmd != 0) {
> > +		ret = -EINTR;
> > +		goto out;
> > +	}
> > +
> > +	mappass->reqcopy = *req;
> > +	icsk = inet_csk(mappass->sock->sk);
> > +	queue = &icsk->icsk_accept_queue;
> > +	spin_lock(&queue->rskq_lock);
> > +	data = queue->rskq_accept_head != NULL;
> > +	spin_unlock(&queue->rskq_lock);
>
> What is the purpose of the queue lock here?

It is only there to protect accesses to rskq_accept_head: functions that
change rskq_accept_head take this lock; see, for example,
net/ipv4/inet_connection_sock.c:inet_csk_reqsk_queue_add. I'll add an
in-code comment (a sketch of it follows at the end of this mail).

> > +	if (data) {
> > +		mappass->reqcopy.cmd = 0;
> > +		ret = 0;
> > +		goto out;
> > +	}
> > +	spin_unlock_irqrestore(&mappass->copy_lock, flags);
> > +
> > +	/* Tell the caller we don't need to send back a notification yet */
> > +	return -1;
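For concreteness, the comment I have in mind would look something like
this (a sketch only; the exact wording may change in the next version,
and all identifiers are the ones already in the quoted hunk):

	mappass->reqcopy = *req;
	icsk = inet_csk(mappass->sock->sk);
	queue = &icsk->icsk_accept_queue;
	/*
	 * rskq_lock protects rskq_accept_head: writers such as
	 * inet_csk_reqsk_queue_add() in net/ipv4/inet_connection_sock.c
	 * take it before linking a new request into the accept queue,
	 * so take it here too before reading the head.
	 */
	spin_lock(&queue->rskq_lock);
	data = queue->rskq_accept_head != NULL;
	spin_unlock(&queue->rskq_lock);

The lock is held only around the single read of rskq_accept_head: if no
connection is queued yet, the request is left pending in reqcopy and the
reply is deferred, hence the "return -1" above.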