Re: [Xen-devel] NMI deferral on i386
>>> Keir Fraser <keir@xxxxxxxxxxxxx> 21.05.07 16:17 >>>
>On 21/5/07 15:01, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>
>> I think I found a pretty reasonable solution, which I'm attaching in its
>> current (3.1-based) form, together with the prior (untested) variant that
>> copied the NMI behavior. Even if it doesn't apply to -unstable, I'd be
>> glad if you could have a brief look to see whether you consider the
>> approach too intrusive (in which case there would be no point in trying
>> to bring it forward to -unstable).
>
>You'll have to explain how the changes to the x86_32 entry.S work. The rest
>of the patch looks good.

The idea is to always check the values read from %ds and %es against
__HYPERVISOR_DS, and to store into the current frame (all normal handlers)
or the outermost one (NMI and MCE) only if the value read is different.
That way, any NMI or MCE occurring during frame setup stores whatever
selectors have not been saved yet on behalf of the interrupted handler: the
interrupted handler has either already managed to read the guest selector
(in which case it can store it regardless of whether the NMI/MCE kicked in
between the read and the store), or it finds __HYPERVISOR_DS already in the
register, in which case it knows not to store (the nested handler has done
the store for it).

For the restore portion, this exploits the fact that there is exactly one
such code sequence. By moving the selector restore past all other restores
(including all stack pointer adjustments), the NMI/MCE handlers can safely
detect whether any selector has already been restored (by range-checking
EIP) and move EIP back to the beginning of the selector restore sequence,
without having to touch the stack pointer itself or any other GPR.

Jan
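
For illustration, the save and fixup logic described above might look
roughly like the following GNU-as sketch. This is not the actual
xen/arch/x86/x86_32/entry.S: HYPERVISOR_DS, the UREGS_* frame offsets and
all labels are made-up placeholders, only %ds is shown (%es would be
handled identically), and the addressing on the exit path is simplified.

        .set  HYPERVISOR_DS, 0xe010      # placeholder selector value
        .set  UREGS_ds,      0x20        # placeholder frame offset of saved %ds
        .set  UREGS_eip,     0x38        # placeholder frame offset of saved EIP

        .text

# Frame setup (normal handlers): store %ds into the frame only if it still
# holds a guest selector.  If it already reads HYPERVISOR_DS, a nested
# NMI/MCE handler switched it and has stored the guest value on our behalf.
save_ds:
        movl    %ds, %ecx
        cmpw    $HYPERVISOR_DS, %cx
        je      1f                       # already hypervisor DS: skip the store
        movl    %ecx, UREGS_ds(%esp)     # record the guest selector
1:      movl    $HYPERVISOR_DS, %ecx
        movl    %ecx, %ds                # continue on the hypervisor selector
        ret

# Exit path: the selector reload is the very last thing before returning to
# the guest, forming a single recognisable instruction sequence with a
# well-known start and end address.
restore_selectors:
        movl    UREGS_ds(%esp), %ecx
        movl    %ecx, %ds
restore_done:
        iret

# NMI/MCE fixup: if the interrupted EIP lies inside the selector restore
# sequence, wind it back to the sequence's start so the reloads are redone
# after the guest selectors have been put back into the outer frame.
nmi_fixup:
        movl    UREGS_eip(%esp), %eax
        cmpl    $restore_selectors, %eax
        jb      2f
        cmpl    $restore_done, %eax
        jae     2f
        movl    $restore_selectors, UREGS_eip(%esp)
2:      ret

The key property of the scheme, as described above, is that the conditional
store makes the save path idempotent and the EIP range check makes the
restore path restartable, so neither needs to mask NMI/MCE.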