[Xen-devel] Re : blkdev_issue_discard hung [solved ?]
It seems to be a timeout problem. I can reproduce it with an LVM device and block throttling:

# ls -ld /dev/dm-3
brw-rw---- 1 root disk 253, 3 sept. 26 12:17 /dev/dm-3
# echo '253:3 500' > /sys/fs/cgroup/blkio/blkio.throttle.read_iops_device
# echo '253:3 500' > /sys/fs/cgroup/blkio/blkio.throttle.write_iops_device
# echo '253:3 4194304' > /sys/fs/cgroup/blkio/blkio.throttle.write_bps_device
# echo '253:3 4194304' > /sys/fs/cgroup/blkio/blkio.throttle.read_bps_device

So it's probably just a timeout problem. I tried a 4.13.3 kernel for Dom0, with the latest xen-blkback driver version, and it doesn't have the problem. Good!

On Monday, September 25, 2017 at 22:47 +0200, Olivier Bonvalet wrote:
> Hi,
>
> I have a problem with Xen 4.8.2 (or Linux 4.9.51 as the dom0 kernel):
> calls to the DISCARD command from the DomU frequently hang.
> I also have the problem with Xen 4.8.1 and Linux 4.9 from the Debian 9
> (stretch) repositories.
>
> Here are the logs from dom0:
>
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540610] INFO: task 1.xvdo-0:2740 blocked for more than 120 seconds.
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540667]       Not tainted 4.9-dae-dom0 #1
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540699] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540752] 1.xvdo-0        D    0  2740      2 0x00000000
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540758]  ffff88027c6ac080 0000000000000000 ffff880285617e00 ffff88027ff4a280
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540763]  ffff88027c762fc0 ffffc90011cdfbc0 ffffffff8153c720 ffff880279e40600
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540767]  ffff88027c762fc0 7fffffffffffffff ffff88027c762fc0 00000000007fe000
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540770] Call Trace:
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540783]  [<ffffffff8153c720>] ? __schedule+0x190/0x570
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540786]  [<ffffffff8153cb2d>] ? schedule+0x2d/0x80
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540793]  [<ffffffff8153fbce>] ? schedule_timeout+0x17e/0x2a0
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540799]  [<ffffffff8125a5c5>] ? generic_make_request_checks+0x145/0x430
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540804]  [<ffffffff810c2cf6>] ? ktime_get+0x36/0xa0
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540808]  [<ffffffff8153c528>] ? io_schedule_timeout+0x98/0x100
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540811]  [<ffffffff8153ddb4>] ? wait_for_completion_io+0xa4/0x120
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540815]  [<ffffffff81088040>] ? wake_up_q+0x70/0x70
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540823]  [<ffffffff81255ca7>] ? submit_bio_wait+0x47/0x60
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540826]  [<ffffffff81263fa5>] ? 0x65/0xa0
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540837]  [<ffffffffa02d9e3e>] ? __do_block_io_op+0x3be/0x610 [xen_blkback]
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540841]  [<ffffffffa02da521>] ? xen_blkif_schedule+0x101/0x6f0 [xen_blkback]
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540847]  [<ffffffff8109ca10>] ? wake_atomic_t_function+0x50/0x50
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540850]  [<ffffffffa02da420>] ? xen_blkif_be_int+0x30/0x30 [xen_blkback]
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540854]  [<ffffffff81080a52>] ? kthread+0xc2/0xe0
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540856]  [<ffffffff81080990>] ? kthread_create_on_node+0x40/0x40
> Sep 25 22:32:44 hyp04-sbg kernel: [ 7468.540859]  [<ffffffff81540e05>] ? ret_from_fork+0x25/0x30
>
> Any ideas about what's happening, and how to fix it?
>
> Thanks,
>
> Olivier
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> https://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
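[Note: a back-of-the-envelope sketch of why the throttle settings above reproduce the hang. The 8 GiB volume size is an illustrative assumption, not a figure from the thread; the throttle value is the one set above. If discard bios are accounted by the blkio byte throttle, a discard spanning the volume takes on the order of half an hour to drain, far past the 120 s hung-task threshold — consistent with the timeout theory.]

```shell
# Illustrative arithmetic only -- the volume size is assumed, the
# throttle value is the 4 MiB/s set above via blkio.throttle.*_bps_device.
THROTTLE_BPS=4194304                       # bytes/sec, as set above
VOLUME_BYTES=$((8 * 1024 * 1024 * 1024))   # hypothetical 8 GiB LVM volume

# Time for a volume-spanning discard to drain through the throttle.
DRAIN_SECS=$((VOLUME_BYTES / THROTTLE_BPS))
echo "${DRAIN_SECS}s to drain the discard"   # 2048s >> 120s hung-task limit
```

The hang itself can then be triggered by issuing a discard against the throttled device from the DomU (or directly with a tool such as util-linux `blkdiscard`, which destroys all data on the device, so only on a scratch volume).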