
Re: [Xen-devel] [PATCH] x86/boot: Clean up the trampoline transition into Long mode


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 3 Jan 2020 18:55:26 +0000
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Fri, 03 Jan 2020 18:56:01 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 03/01/2020 14:34, Jan Beulich wrote:
> On 03.01.2020 15:25, Andrew Cooper wrote:
>> On 03/01/2020 13:52, Jan Beulich wrote:
>>> On 03.01.2020 14:44, Andrew Cooper wrote:
>>>> On 03/01/2020 13:36, Jan Beulich wrote:
>>>>> On 02.01.2020 15:59, Andrew Cooper wrote:
>>>>>> @@ -111,26 +109,6 @@ trampoline_protmode_entry:
>>>>>>  start64:
>>>>>>          /* Jump to high mappings. */
>>>>>>          movabs  $__high_start, %rdi
>>>>>> -
>>>>>> -#ifdef CONFIG_INDIRECT_THUNK
>>>>>> -        /*
>>>>>> -         * If booting virtualised, or hot-onlining a CPU, sibling threads can
>>>>>> -         * attempt Branch Target Injection against this jmp.
>>>>>> -         *
>>>>>> -         * We've got no usable stack so can't use a RETPOLINE thunk, and are
>>>>>> -         * further than disp32 from the high mappings so couldn't use
>>>>>> -         * JUMP_THUNK even if it was a non-RETPOLINE thunk.  Furthermore, an
>>>>>> -         * LFENCE isn't necessarily safe to use at this point.
>>>>>> -         *
>>>>>> -         * As this isn't a hotpath, use a fully serialising event to reduce
>>>>>> -         * the speculation window as much as possible.  %ebx needs preserving
>>>>>> -         * for __high_start.
>>>>>> -         */
>>>>>> -         */
>>>>>> -        mov     %ebx, %esi
>>>>>> -        cpuid
>>>>>> -        mov     %esi, %ebx
>>>>>> -#endif
>>>>>> -
>>>>>>          jmpq    *%rdi
>>>>> I can see this being unneeded when running virtualized, as you said
>>>>> in reply to Wei. However, for hot-onlining (when other CPUs may run
>>>>> random vCPU-s) I don't see how this can safely be dropped. There's
>>>>> no similar concern for S3 resume, as thaw_domains() happens only
>>>>> after enable_nonboot_cpus().
>>>> I covered that in the same reply.  Any guest which can use branch target
>>>> injection against this jmp can also poison the regular branch predictor
>>>> and get at data that way.
>>> Aren't you implying then that retpolines could also be dropped?
>> No.  It is a simple risk vs complexity tradeoff.
>>
>> Guests running on a sibling *can already* attack this branch with BTI,
>> because CPUID isn't a fix to bad BTB speculation, and the leakage gadget
>> need only be a single instruction.
>>
>> Such a guest can also attack Xen in general with Spectre v1.
>>
>> As I said - this was introduced because of paranoia, back while the few
>> people who knew about the issues (only several hundred at the time) were
>> attempting to figure out what exactly a speculative attack looked like,
>> and were applying duct tape to everything suspicious because we had 0
>> time to rewrite several core pieces of system handling.
> Well, okay then:
> Acked-by: Jan Beulich <jbeulich@xxxxxxxx>

Thanks.  I've adjusted the commit message in light of this conversation.

>
>>>> Once again, we get to CPU Hotplug being an unused feature in practice,
>>>> which is completely evident now with Intel MCE behaviour.
>>> What does Intel's MCE behavior have to do with whether CPU hotplug
>>> (or hot-onlining) is (un)used in practice?
>> The logical consequence of hotplug breaking MCEs.
>>
>> If hotplug had been used in practice, the MCE behaviour would have come
>> to light much sooner, when MCEs didn't work in practice.
>>
>> Given that MCEs really did work in practice even before the L1TF days,
>> hotplug wasn't in common-enough use for anyone to notice the MCE behaviour.
> Or systems where CPU hotplug was actually used on were of good
> enough quality to never surface #MC

Suffice it to say that there is plenty of evidence to the contrary here.

Without going into details for obvious reasons, there have been a number
of #MC conditions (both preexisting, and regressions) in recent
microcode discovered in the field, because everyone needs to take
microcode updates proactively these days.

> (personally I don't think
> I've seen more than a handful of non-reproducible #MC instances)?

You don't run a "cloud scale" number of systems.

Even XenServer's test estate of a few hundred systems sees a concerning
(but ultimately, background-noise) rate of #MCs, some of which are
definite hardware failures (and are kept around for error-testing
purposes), while others are in need of investigation.

> Or people having run into the bad behavior simply didn't have the
> resources to investigate why their system shut down silently
> (perhaps giving entirely random appearance of the behavior)?

Customers don't tolerate their hosts randomly crashing, especially if it
happens consistently.

Yes - technically speaking all of these are options, but the balance of
probability is vastly on the side of CPU hot-plug not actually being
used at any scale in practice.  (Not least because there are still
interrupt handling bugs present in Xen's implementation.)

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
