Re: [Xen-API] XCP 1.0-beta: md on dom0 kernel causes kernel oops
Hi all,

Apologies - I've been off on paternity/holiday so have not been chasing this.

There's a little confusion here - unfortunately, when pushing the beta, I forgot to update the mercurial repositories. I have now fixed this. This means that the link given by Tomoe was not the version of the patch that was in the beta. The current version is:

http://xenbits.xen.org/XCP/linux-2.6.32.pq.hg?file/043b76e4943c/blktap2-ioc.diff

This is the version of the patch that is causing the md driver to fail. Daniel, can you take another look to see if Tomoe's suggestion is still valid?

Many thanks,
Jon

On 15 Dec 2010, at 10:58, Tomoe Sugihara wrote:

> On 12/08/2010 07:53 AM, Tomoe Sugihara wrote:
>> On 12/07/2010 11:51 PM, Daniel Stodden wrote:
>>> On Mon, 2010-12-06 at 07:40 -0500, Tomoe Sugihara wrote:
>>>> On 12/03/2010 08:03 AM, Tomoe Sugihara wrote:
>>>>> On 12/02/2010 05:50 PM, Simon Rowe wrote:
>>>>>> On Wednesday 01 Dec 2010 23:03:48 Tomoe Sugihara wrote:
>>>>>>
>>>>>>> I tried the following two and both ended up with an Oops.
>>>>>>>
>>>>>>> 1. Just two regular HDDs --- /dev/sdc and /dev/sde or whatever.
>>>>>>>
>>>>>>> Here's how I reproduced it just now. Very soon after dd, the kernel
>>>>>>> panics.
>>>>>>>
>>>>>>> $ mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdc /dev/sde
>>>>>>> mdadm: /dev/sdc appears to be part of a raid array:
>>>>>>>     level=raid1 devices=2 ctime=Mon Nov 29 15:52:53 2010
>>>>>>> mdadm: /dev/sde appears to be part of a raid array:
>>>>>>>     level=raid1 devices=2 ctime=Mon Nov 29 09:34:34 2010
>>>>>>> Continue creating array? y
>>>>>>> mdadm: array /dev/md0 started.
>>>>>>> $ dd if=/dev/zero of=/dev/md0 bs=1M
>>>>>>>
>>>>>>> 2. dm linear devices which are on top of iSCSI
>>>>>>>
>>>>>>> mdadm --create /dev/mirror/000008bc --auto=md --bitmap=internal
>>>>>>>   --metadata=0 --run --bitmap-chunk=65536 --delay=5 --level=1
>>>>>>>   --assume-clean --raid-devices=3 /dev/mapper/00000001-00000003
>>>>>>>   /dev/mapper/00000002-000002d7 /dev/mapper/00000003-0000050a
>>>>>>>
>>>>>>> Interestingly, this doesn't cause a kernel panic; a similar stack
>>>>>>> trace shows up in dmesg and mdadm stalls.
>>>>>>
>>>>>> We don't support (and therefore don't test) software RAID in XenServer.
>>>>>> I'll ask some of our kernel guys if there's anything obvious.
>>>>>>
>>>>>> Simon
>>>>>>
>>>>>
>>>>> Did you see the email in which I reported the issue? Here's a link to the
>>>>> post:
>>>>> http://www.mail-archive.com/xen-api@xxxxxxxxxxxxxxxxxxx/msg02151.html
>>>>>
>>>>> As I said, the issue is introduced by a dom0 custom patch (I put a
>>>>> link to the repository below), therefore it is not present in the vanilla
>>>>> kernel.
>>>>>
>>>>> http://xenbits.xen.org/XCP/linux-2.6.32.pq.hg?diff/2d68a42120cf/blktap2-ioc.diff
>>>>>
>>>>> I would really like it to be addressed, not only for our SM backend plugin
>>>>> but also for the sake of removing the vulnerability.
>>>>
>>>> Hi,
>>>>
>>>> We would really like this to be addressed so we can test
>>>> XCP 1.0 beta with our SM backend driver.
>>>>
>>>> I found the author and the log for the patch that introduces the oops at
>>>> the following link:
>>>> http://xenbits.xen.org/XCP/linux-2.6.32.pq.hg?annotate/2d68a42120cf/blktap2-ioc.diff
>>>>
>>>> I didn't understand what it is for, but I hope the author may be able to
>>>> help get this fixed.
>>>
>>> Yeah, looks like this queue got tagged for XCP on a quite unfortunate
>>> edge, indeed.
>>> The patch series has long since been fixed, but it was
>>> merged a little further on trunk, so I can't just offer you a
>>> replacement for the -ioc diff.
>>
>> Thanks for following this up, Daniel.
>> I'll wait for a fixed kernel.
>
> Hi,
> I'm still looking forward to a fixed kernel.
>
>>> One could replace the entire blktap2 series, or maybe there's a bigger
>>> update already in the making (?).
>
> Any update on this? I hope someone is working on the issue.
>
> Daniel,
> Thanks for sending me a set of patches offline.
> Sorry, I haven't had a chance to try them yet, though.
>
> Best,
> Tomoe

_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api
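For reference, the reproduction steps quoted in the thread can be collected into a single sequence. This is only a sketch based on the messages above: the device names /dev/sdc and /dev/sde are the reporter's examples and must be replaced with disks that are safe to overwrite, the count= limit is added here just to bound the write, and the oops is only expected on an XCP 1.0-beta dom0 kernel carrying the blktap2-ioc.diff patch.

# Sketch of the reproduction from the thread. WARNING: destroys data on the
# two member disks; /dev/sdc and /dev/sde are placeholders.
DISK1=/dev/sdc
DISK2=/dev/sde

# Create a two-disk RAID1 array; --run skips the "Continue creating array?" prompt
# seen in the original transcript.
mdadm --create /dev/md0 --level=mirror --raid-devices=2 --run "$DISK1" "$DISK2"

# Write to the array; on the affected dom0 kernel this is where the oops appeared.
dd if=/dev/zero of=/dev/md0 bs=1M count=1024

# On a kernel without the blktap2-ioc.diff patch this should complete normally;
# check the kernel log for any stack trace afterwards.
dmesg | tail -n 50

Passing --run only suppresses the interactive confirmation and count= only bounds the write; neither is expected to change whether the oops reproduces.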