Re: [Xen-devel] xend segfaults when starting
On Wed, 11 Aug 2010, Ian Campbell wrote:
> On Wed, 2010-08-04 at 15:55 +0100, Stefano Stabellini wrote:
> > On Wed, 2010-08-04 at 14:12 +0100, Christoph Egger wrote:
> > > Hi!
> > >
> > > xend causes python to segfault on startup.
> > > The changeset in error is: 21904:6a0dd2c29999
> >
> > It doesn't, in fact:
> >
> > changeset:   21907:6a0dd2c29999
> > parent:      21904:9f49667fec71
> > user:        Ian Campbell <ian.campbell@xxxxxxxxxx>
> > date:        Fri Jul 30 16:20:48 2010 +0100
> > summary:     libxc: free thread specific hypercall buffer on
> >              xc_interface_close
> >
> > I am going to revert this and leave it to Ian to fix it properly
> > (currently on vacation).
>
> I'm currently looking at this but I'm not seeing this issue: xend starts
> up fine and I can start a (PV) VM.
>
> When you said "segfault on startup", did you mean of xend or of a domain?
> (I think the former.)
>
> Can you give me a little more information about your environment, please?
> Is it NetBSD by any chance?
>
> Please could you reapply this changeset and add some tracing to
> hcall_buf_prep and _xc_clean_hcall_buf to print out hcall_buf and
> hcall_buf->buf as they are allocated and freed. The line numbers
> indicate that the free(hcall_buf->buf) is faulting. We've just called
> unlock_pages on the same address, but since we seem to deliberately
> throw away any errors from munlock (see "safe_munlock"), that doesn't
> really tell us much about its validity.
>
> Perhaps this whole area needs looking at with an eye to NetBSD
> portability?

I gave Christoph a few days to reply. I'll reapply the patch for now, but
if Christoph can come up with a good explanation of the problem I'll
revert it again or fix the bug.
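For anyone wanting to follow Ian's suggestion, here is a minimal, self-contained
sketch (not the actual libxc code) of the thread-specific hypercall buffer
pattern with the proposed tracing added to hcall_buf_prep and
_xc_clean_hcall_buf. Only the function names, hcall_buf->buf, the faulting
free(hcall_buf->buf) and the safe_munlock idea come from this thread; the
struct layout, HCALL_BUF_SIZE and the allocation calls are illustrative
assumptions.

/*
 * Standalone sketch, not the real libxc code: a thread-specific
 * hypercall buffer that is mlock()ed when first prepared and released
 * on cleanup, with fprintf() tracing of hcall_buf and hcall_buf->buf
 * at allocation and free time.
 */
#define _POSIX_C_SOURCE 200112L
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define HCALL_BUF_SIZE 4096                  /* assumed single-page buffer */

struct hcall_buf {
    void *buf;                               /* page-aligned, mlock()ed memory */
};

static __thread struct hcall_buf *hcall_buf; /* one buffer per thread (gcc/clang) */

/* Same idea as the "safe_munlock" mentioned above: ignore the result, keep errno. */
static void safe_munlock(const void *addr, size_t len)
{
    int saved_errno = errno;
    (void)munlock(addr, len);
    errno = saved_errno;
}

static int hcall_buf_prep(void)
{
    void *p = NULL;

    if (hcall_buf)
        return 0;                            /* already prepared for this thread */

    hcall_buf = calloc(1, sizeof(*hcall_buf));
    if (!hcall_buf)
        return -1;

    /* mlock() wants page-aligned memory. */
    if (posix_memalign(&p, (size_t)sysconf(_SC_PAGESIZE), HCALL_BUF_SIZE)) {
        free(hcall_buf);
        hcall_buf = NULL;
        return -1;
    }
    if (mlock(p, HCALL_BUF_SIZE)) {
        free(p);
        free(hcall_buf);
        hcall_buf = NULL;
        return -1;
    }
    hcall_buf->buf = p;

    fprintf(stderr, "hcall_buf_prep:      hcall_buf=%p buf=%p\n",
            (void *)hcall_buf, hcall_buf->buf);
    return 0;
}

static void _xc_clean_hcall_buf(void)
{
    if (!hcall_buf)
        return;

    fprintf(stderr, "_xc_clean_hcall_buf: hcall_buf=%p buf=%p\n",
            (void *)hcall_buf, hcall_buf->buf);

    safe_munlock(hcall_buf->buf, HCALL_BUF_SIZE);
    free(hcall_buf->buf);                    /* the call reported as faulting */
    free(hcall_buf);
    hcall_buf = NULL;
}

int main(void)
{
    if (hcall_buf_prep())
        perror("hcall_buf_prep");
    _xc_clean_hcall_buf();
    return 0;
}

Because safe_munlock deliberately discards the munlock() result, a corrupted or
already-freed pointer would only surface at the free() call, so comparing the
addresses printed at prep and cleanup time is the quickest way to see whether
the buffer being freed is the same one that was allocated.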