
Re: [PATCH v5 2/6] evtchn: convert domain event lock to an r/w one


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 27 May 2021 13:01:07 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Wei Liu <wl@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>
  • Delivery-date: Thu, 27 May 2021 11:01:40 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, Jan 27, 2021 at 09:16:07AM +0100, Jan Beulich wrote:
> Especially for the use in evtchn_move_pirqs() (called when moving a vCPU
> across pCPU-s) and the ones in EOI handling in PCI pass-through code,
> serializing perhaps an entire domain isn't helpful when no state (which
> isn't e.g. further protected by the per-channel lock) changes.

I'm unsure this move is good from a performance PoV: the set of
operations switched to take the lock in read mode is very small, while
all the remaining operations pay a performance penalty compared to
using a plain spin lock.
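
For illustration only, a minimal sketch of the concern (the caller name
is made up, the lock usage follows the patch): every path that still
needs exclusive access now goes through the r/w lock's write side
instead of a plain spin lock, while only a few paths such as
evtchn_move_pirqs() get to use read mode:

    /* Hypothetical caller, for illustration only. */
    static void some_event_lock_writer(struct domain *d)
    {
        write_lock(&d->event_lock);   /* was: spin_lock(&d->event_lock); */
        /* ... modify per-domain event-channel state ... */
        write_unlock(&d->event_lock); /* was: spin_unlock(&d->event_lock); */
    }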

> Unfortunately this implies dropping of lock profiling for this lock,
> until r/w locks may get enabled for such functionality.
> 
> While ->notify_vcpu_id is now meant to be consistently updated with the
> per-channel lock held, an extension applies to ECS_PIRQ: The field is
> also guaranteed to not change with the per-domain event lock held for
> writing. Therefore the link_pirq_port() call from evtchn_bind_pirq()
> could in principle be moved out of the per-channel locked regions, but
> this further code churn didn't seem worth it.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> @@ -1510,9 +1509,10 @@ int evtchn_destroy(struct domain *d)
>  {
>      unsigned int i;
>  
> -    /* After this barrier no new event-channel allocations can occur. */
> +    /* After this kind-of-barrier no new event-channel allocations can occur. */
>      BUG_ON(!d->is_dying);
> -    spin_barrier(&d->event_lock);
> +    read_lock(&d->event_lock);
> +    read_unlock(&d->event_lock);

Don't you want to use write mode here to ensure there are no readers
that took the lock before is_dying was set, and could thus make wrong
assumptions?

As I understand it, the point of the barrier here is to ensure there
are no lockers carried over from before is_dying was set.
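
For illustration, a sketch of the alternative I have in mind (assuming
nothing else in the hunk changes); taking the lock in write mode would
also wait for any readers that entered before is_dying was set:

    /* Sketch only, not a full replacement for the hunk above. */
    BUG_ON(!d->is_dying);
    write_lock(&d->event_lock);
    write_unlock(&d->event_lock);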

Thanks, Roger.



 

