
[Xen-devel] HPET stack overflow, and general problems with do_IRQ()


I have finally managed to get a full stack dump from affected hardware.

The logs can be found here (including hypervisor with debugging symbols):


The interesting log file is xen.pcpu0.stack.log

By my count (grepping for e008 as CS), there are 8 exception frames
on the Xen stack (all in stack page 6).

However, because of the early ack() at the LAPIC, and the re-enabling
of interrupts, the vectors (in order of interrupts arriving) are

c1, 99, b1, b9, a9, a1, 91, 89

These 8 interrupts take a little more than half the available stack,
while the bottom half of the stack appears to be a vmentry which failed
because of a HAP page fault.

One "solution" would be to extend the Xen PRIMARY_STACK_SIZE from 2
pages to 3, but this is hardly a good thing to do.

I think that the fundamental problem is the early ack and re-enabling of
interrupts.  We have servers where 150 VMs using PCI passthrough are
starting to run out of available entries in the IDTs.  While unlikely,
it would be possible to encounter a situation with 40 nested interrupts,
at which point there is a real danger of trashing the compat sysenter
trampoline, located at the base of stack page 3, and only a few more
before MCEs and NMIs would end up walking over the main stack.

While I hate to suggest this, the only sensible solution without edge
cases is to never enable interrupts in do_IRQ().  I suppose a slightly
less extreme solution could be to promote the TPR to 0xe0 and re-enable
interrupts, so high-priority processing can still occur?


Unfortunately, I am out of the office until Monday 26th, with limited
access to the internet during that time (although I will still have
internet tomorrow morning).  I will check emails when I can, but I don't
expect to be able to make timely contributions to the above discussion
during this time.

