
[Xen-devel] Ping: Re: [PATCH] x86/HVM: correctly deal with benign exceptions when combining two



Despite all the other desirable adjustments I think the proposed change
is worthwhile in its own right. I certainly don't mean to extend the scope
of the change, and feedback so far hasn't really pointed out anything
that needs to change _within_ its scope.
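For reference, the combining rule under discussion can be sketched roughly as below. This is an illustrative model only, not Xen's actual hvm_combine_hw_exceptions() implementation; the function and enum names are made up, the classification follows the SDM's double-fault table, and the two-benign case reflects the patch's choice of favoring the first exception so that NMI/#MC aren't lost:

```c
#include <assert.h>

/* Illustrative model only -- not the actual Xen code.  Exception
 * classes as used by the SDM's double fault rules. */
enum exn_class { EXN_BENIGN, EXN_CONTRIBUTORY, EXN_PAGE_FAULT };

static enum exn_class classify(unsigned int vec)
{
    switch ( vec )
    {
    case  0: /* #DE */
    case 10: /* #TS */
    case 11: /* #NP */
    case 12: /* #SS */
    case 13: /* #GP */
        return EXN_CONTRIBUTORY;
    case 14: /* #PF */
        return EXN_PAGE_FAULT;
    default: /* #DB, NMI, #MC, #AC, hardware interrupts, ... */
        return EXN_BENIGN;
    }
}

/* Combine @first (the exception being delivered) with @second (raised
 * during that delivery); return the vector to deliver.  8 == #DF. */
static unsigned int combine(unsigned int first, unsigned int second)
{
    enum exn_class c1 = classify(first), c2 = classify(second);

    /* Benign exceptions, first or second, never yield #DF ... */
    if ( c1 == EXN_BENIGN && c2 == EXN_BENIGN )
        return first;    /* ... and with two benign ones, favor the
                            first (the change proposed here). */
    if ( c1 == EXN_BENIGN || c2 == EXN_BENIGN )
        return second;   /* handled "sequentially"; the model simply
                            delivers the second. */

    /* Contributory / #PF combinations per the SDM's #DF table. */
    if ( c2 == EXN_CONTRIBUTORY || c1 == EXN_PAGE_FAULT )
        return 8;        /* #DF */

    return second;       /* contributory first, #PF second */
}
```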

Jan

>>> On 06.02.19 at 11:54,  wrote:
>>>> On 06.02.19 at 10:57, <andrew.cooper3@xxxxxxxxxx> wrote:
> > On 06/02/2019 07:40, Jan Beulich wrote:
> >> Benign exceptions, no matter whether they're first or second, will never
> >> cause #DF (a benign exception being second can basically only be #AC, as
> >> in the XSA-156 scenario).
> > 
> > #DB can happen as well, but I'm not sure if this example is relevant
> > here.  Both of those loops were repeated exceptions caused by trying to
> > deliver the first exception, rather than an issue of priority among
> > simultaneous exceptions during the execution of an instruction.
> 
> No, I don't think #DB falls into this category (I had it here before
> and then removed it): If a data breakpoint hits while delivering an
> exception or interrupt, that - being a trap - will be delivered _after_
> the original event, i.e. on the first instruction of the other handler.
> That's also the way the XSA-156 advisory describes it.
> 
> > For VT-x, we're supposed to fill in the PENDING_DBG control rather than
> > raise #DB directly, explicitly to allow the priority of interrupts to be
> > expressed correctly.  There is a very large quantity of untangling
> > work to do, and it's very clear I'm not going to have time to fix even
> > the STI/Singlestep issue in the 4.12 timeframe.
> 
> Are you saying there need to be any vendor-specific adjustments
> to this function then? I would very much hope that the code here
> could remain vendor-independent, with vendor-specific adjustments
> done in vendor-specific code, and independently of the proposed
> change here.
> 
> >> Sadly neither AMD nor Intel really define what happens with two benign
> >> exceptions - the term "sequentially" used by both is poisoned by how the
> >> combining of benign and non-benign exceptions is described. Since NMI,
> >> #MC, and hardware interrupts are all benign and (perhaps with the
> >> exception of #MC) can't occur second, favor the first in order to not
> >> lose it.
> > 
> > #MC has the highest priority so should only be recognised immediately
> > after an instruction boundary.
> 
> Are you sure? What about an issue with one of the memory
> accesses involved in delivering a previously raised exception?
> 
> >  The interesting subset is then a #DB
> > from task switch, then NMI, then #DB from other pending traps from the
> > previous instruction, so I think it is quite possible for us to end up
> > with a #DB stacked on top of the head of the NMI/#MC handler, if we
> > follow a sequential model.  (Lucky for XSA-260 then, if this case
> > actually exists.)
> 
> But for that we'd first of all need callers of this function to
> record the fact that their exception was squashed. Plus,
> as per above, such a #DB is to be delivered after the one
> that caused it to be raised in the first place, so is not subject
> to the behavior of hvm_combine_hw_exceptions() itself, and
> hence beyond the scope of this patch.
> 
> > I don't however see a way of stacking #AC, because you can't know that
> > one has occurred until later in the instruction cycle than all other
> > sources.  What would happen is that you'd raise #AC from previous
> > instruction, and then recognise #MC while starting to execute the #AC
> > entry point.  (I think)
> 
> Well - see XSA-156 for what hardware does in that situation.
> Besides eliminating that security issue, I don't think this is a
> very important case to deal with correctly, unless you're aware
> of OSes which allow handling #AC in ring 3.
> 
> Irrespective of all of the above - what am I to take from your
> response? I.e. what adjustments (within the scope of this patch)
> do you see necessary for the change to become acceptable? It
> was my thinking that this change alone would have masked the
> original #DF issue you've run into, so would likely be worthwhile
> without any of the other related work you hint at.
> 
> Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel