Re: [Xen-devel] [PATCH RFC v2] x86/emul: Fix the handling of unimplemented Grp7 instructions
On 05/09/2017 07:57, Jan Beulich wrote:
>>>> Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 09/04/17 7:35 PM >>>
>> Grp7 is abnormally complicated to decode, even by x86's standards, with
>> {s,l}msw being the problematic cases.
>>
>> Previously, any value which fell through the first switch statement
>> (looking for instructions with entirely implicit operands) would be
>> interpreted by the second switch statement (handling instructions with
>> memory operands).
>>
>> Unimplemented instructions would then hit the #UD case for having a
>> non-memory operand, rather than taking the cannot_emulate path.
>>
>> Place a big if/else around the two switch statements (accounting for
>> {s,l}msw, which need handling in the else clause), so both switch
>> statements can have a default goto cannot_emulate path.
>>
>> This fixes the emulation of xend, which would hit the #UD path when it
>> should complete with no side effects.
>
> This could be had with a single-line change. And while I can see this
> mistake of mine alone being justification for the restructuring, it's
> still a rather big change due to all the re-indentation. Did you instead
> consider simply combining the two switch() statements (retaining the
> present indentation) by using range case labels for the opcodes
> permitting operands?

That was my first idea, but the cases are not adjacent. You need 3 ranges
for the mod != 11 instructions, and 4 for {s,l}msw, and there was no clean
way I could find to express that.

> That would have the added benefit of no longer producing #UD for things
> like VMCALL, but instead having those go to cannot_emulate too.

This is the behaviour the patch is intended to introduce. What's broken
with the logic?

~Andrew
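
[Editor's note: to make the restructuring concrete, here is a minimal,
self-contained sketch of the control flow the patch description gives.
This is not the real Xen emulator code -- grp7() is an invented name, the
decode table is trimmed to a few instructions, and the real logic lives in
x86_emulate() -- but the encodings shown are real: swapgs = 0f 01 f8,
rdtscp = 0f 01 f9, xend = 0f 01 d5, smsw = 0f 01 /4, lmsw = 0f 01 /6.
Register forms with entirely implicit operands go through one switch,
everything else (memory operands, plus {s,l}msw, which also permit a
register operand) through the other, and both default to cannot_emulate:]

    /* Illustrative sketch only -- not the actual Xen code. */
    #include <stdio.h>

    static const char *grp7(unsigned char modrm)
    {
        unsigned char mod = modrm >> 6;
        unsigned char reg = (modrm >> 3) & 7;

        if ( mod == 3 && reg != 4 && reg != 6 )
        {
            /* Register forms: instructions with entirely implicit operands. */
            switch ( modrm )
            {
            case 0xf8: return "swapgs";
            case 0xf9: return "rdtscp";
            default:   goto cannot_emulate; /* e.g. xend (d5), vmcall (c1) */
            }
        }
        else
        {
            /* Memory forms, plus {s,l}msw with either kind of operand. */
            switch ( reg )
            {
            /* sgdt/sidt/lgdt/lidt/invlpg elided for brevity. */
            case 4:  return "smsw";
            case 6:  return "lmsw";
            default: goto cannot_emulate;
            }
        }

     cannot_emulate:
        return "cannot_emulate";
    }

    int main(void)
    {
        /* xend now reaches cannot_emulate instead of the #UD path. */
        printf("0f 01 d5 (xend)   -> %s\n", grp7(0xd5));
        printf("0f 01 f8 (swapgs) -> %s\n", grp7(0xf8));
        printf("0f 01 20 (smsw m) -> %s\n", grp7(0x20));
        return 0;
    }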
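[Editor's note: a hypothetical sketch of why the combined switch() Jan
suggests comes out awkward. With a single switch over the raw modrm byte,
GNU C case ranges (a GCC extension, which Xen builds with) need three
disjoint ranges just for the mod != 11 memory forms. One plausible reading
of the "4 for {s,l}msw" count: isolating smsw (reg == 4) as its own case
would take four disjoint ranges, one per mod value, and likewise lmsw
(reg == 6). combined_grp7() is an invented name:]

    /* Hypothetical sketch (GNU C case ranges) -- not proposed code. */
    #include <stdio.h>

    static int combined_grp7(unsigned char modrm)
    {
        switch ( modrm )
        {
        case 0x00 ... 0x3f:    /* mod == 00 \                             */
        case 0x40 ... 0x7f:    /* mod == 01  > 3 ranges for mod != 11     */
        case 0x80 ... 0xbf:    /* mod == 10 /                             */
            return 1;          /* memory-operand decode would go here     */

        case 0xe0 ... 0xe7:    /* smsw with a register operand (mod == 11) */
        case 0xf0 ... 0xf7:    /* lmsw with a register operand (mod == 11) */
            return 1;          /* smsw's memory forms (0x20-0x27,
                                * 0x60-0x67, 0xa0-0xa7) already sit inside
                                * the three ranges above, so a standalone
                                * smsw case would need 4 disjoint ranges,
                                * and likewise lmsw.                       */

        default:
            return 0;          /* cannot_emulate                          */
        }
    }

    int main(void)
    {
        printf("0f 01 d5 (xend) handled? %d\n", combined_grp7(0xd5)); /* 0 */
        return 0;
    }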