Re: [Xen-devel] [PATCH] Fix blkback/blktap sysfs read bug.
On 2010-01-20 18:16, Daniel Stodden wrote:
>
> Hey.
>
> I think I reverted a similar patch when moving past 2.6.18 a while ago,
> but I don't remember the first kernel versions affected right now.
>
> On Wed, 2010-01-20 at 06:45 -0500, Joe Jin wrote:
> >  static void connect(struct backend_info *);
> >  static int connect_ring(struct backend_info *);
> > @@ -122,10 +123,19 @@
> >  			 struct device_attribute *attr,			\
> >  			 char *buf)					\
> >  	{								\
> > -		struct xenbus_device *dev = to_xenbus_device(_dev);	\
> > -		struct backend_info *be = dev->dev.driver_data;		\
> > +		ssize_t ret = -ENODEV;					\
> > +		struct xenbus_device *dev;				\
> > +		struct backend_info *be;				\
> >  									\
> > -		return sprintf(buf, format, ##args);			\
> > +		if (!get_device(_dev))					\
> > +			return ret;					\
> > +		dev = to_xenbus_device(_dev);				\
>
> I think this deadlocks here:
>
> > +		read_lock(&sysfs_read_lock);				\
> > +		if ((be = dev->dev.driver_data) != NULL)		\
> > +			ret = sprintf(buf, format, ##args);		\
> > +		read_unlock(&sysfs_read_lock);				\
> > +		put_device(_dev);					\
> > +		return ret;						\
> >  	}								\
> >  	static DEVICE_ATTR(name, S_IRUGO, show_##name, NULL)
> >
> > @@ -170,6 +180,7 @@
> >  {
> >  	struct backend_info *be = dev->dev.driver_data;
> >
> > +	write_lock(&sysfs_read_lock);
> >  	if (be->group_added)
>
> .. and here:
>
> >  		xentap_sysfs_delif(be->dev);
> >  	if (be->backend_watch.node) {
> > @@ -187,6 +198,7 @@
> >  	}
> >  	kfree(be);
> >  	dev->dev.driver_data = NULL;
> > +	write_unlock(&sysfs_read_lock);
> >  	return 0;
> >  }
>
> Sysfs may refcount these nodes, blocking xentap_sysfs_delif().

Could you give me some more information about the blocking and the
deadlock? I checked xentap_sysfs_delif() -> sysfs_remove_group() and
did not find any blocking code there; maybe I missed something?

Thanks,
Joe

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel