
Re: [Xen-devel] [PATCH RFC] Add SUPPORT.md

On 09/07/2017 02:57 PM, Roger Pau Monné wrote:
> On Thu, Sep 07, 2017 at 11:49:00AM +0100, George Dunlap wrote:
>> On 08/31/2017 12:25 PM, Roger Pau Monne wrote:
>>> On Thu, Aug 31, 2017 at 11:27:19AM +0100, George Dunlap wrote:


>>>> +### x86/PVH dom0
>>>               ^ v2
>>>> +
>>>> +    Status: Experimental
>>> The status of this is just "not finished". We need at least the PCI
>>> emulation series for having a half-functional PVHv2 Dom0.
>> From the definition of 'Experimental':
>>     Functional completeness: No
>>     Functional stability: Here be dragons
>>     Interface stability: Not stable
>>     Security supported: No
>> "Not finished" -> Functional completeness: No -> Experimental.
>> If there's no way of doing anything with dom0 at all we should probably
>> just remove it from the list.
> Right now it should be removed according to the logic above.

Fair enough.

>>>> +### Online resize of virtual disks
>>>> +
>>>> +    Status: Supported
>>> That pretty much depends on where you are actually storing your disks
>>> I guess. I'm not sure we want to make such compromises.
>> What do you mean?
> I'm not sure online resizing is something that needs spelling out
> separately, it's part of how the block protocol works, just like
> indirect descriptors or persistent grants (which are also not listed
> here, and I think that's fine).
>>>> +### x86/HVM iPXE
>>>> +
>>>> +    Status: Supported, with caveats
>>>> +
>>>> +Booting a guest via PXE.
>>>> +PXE inherently places full trust of the guest in the network,
>>>> +and so should only be used
>>>> +when the guest network is under the same administrative control
>>>> +as the guest itself.
>>> Hm, not sure why this needs to be spelled out, it's just like running
>>> any bootloader/firmware inside a HVM guest, which I'm quite sure we
>>> are not going to list here.
>>> Ie: I don't see us listing OVMF, SeaBIOS or ROMBIOS, simply because
>>> they run inside the guest, so if they are able to cause security
>>> issues, anything else is also capable of causing them.
>> Well iPXE is a feature, so we have to say something about it; and there
>> was a long discussion at the Summit about whether we should list iPXE as
>> "security supported", because *by design* it just runs random code that
>> someone sends it over the network.  But if we say it's not supported, it
>> makes it sound like we think you shouldn't use it.
>> Above was the agreed-upon compromise: to say it was supported but warn
>> people what "supported" means.
> Hm, I'm still not sure this should be explicitly listed here.
> Running random code inside of a guest is not a problem from Xen's PoV,
> and we would never issue a XSA, unless such code is able to break
> outside of the guest, in which case it doesn't matter whether the code
> has been randomly fetched from the network.

That's not true at all.  There are two security boundaries within the
guest: user space -> kernel space, and outside -> [anything].  If there
was a bug in the QEMU network card which allowed a crafted packet to
write into unauthorized memory in the guest, that would be a security
vulnerability, as would a bug which allowed a guest user to make
unauthorized changes to the guest kernel memory (or unauthorized access
to guest devices, &c).

> IMHO iPXE is just like any other firmware that Xen supports, such as
> OVMF/SeaBIOS/ROMBIOS, and I don't see them listed here. I'm not sure
> in which way iPXE is special from the other ones that requires such an
> entry in the support document.

Well first of all, the purpose of this document is in fact in part to
enumerate features, not only to talk about security support.  As such,
we probably should list OVMF / BIOS booting as features. :-)

Secondly, I agree that it's hard to imagine what a genuine security bug
in one of these things (iPXE, BIOS, OVMF) would look like.  However, *if
we did find* a bug in one of those pieces of code that allowed an entity
(either guest user or someone not in the guest at all) to make
unauthorized changes, we would definitely need to issue an XSA for it.
That is what "security supported" means.


> From a Xen PoV, they are just HVM OSes.  Some of them make use of more
> PV interfaces than others, but all HVM guests have the same set of
> interfaces available to them, and thus the same surface of attack.

Similar to the argument above: XSAs will not only be issued for
violations of the guest -> Xen privilege boundary, but also for the
guest user -> guest kernel privilege boundary.  If one of the
PVHVM-related features has a bug such that it allows a user -> kernel
escalation, we will issue an XSA for it.

> I don't think it makes sense to list PVHVM as a guest type in the list
> of supported features.

Maybe we should group these as "PVHVM acceleration" and put them in a
different section, rather than calling them a separate guest type.
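To illustrate the grouping (the section and feature names below are only placeholders, not settled entries), SUPPORT.md might then gain something like:

    ## PVHVM acceleration

    ### x86/PVHVM

        Status: Supported

rather than listing PVHVM alongside PV and HVM as a separate guest type.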


>>>> +### ARM/SMMU
>>>> +
>>>> +    Status: Supported, with caveats
>>>> +
>>>> +Only ARM SMMU hardware is supported; non-ARM SMMU hardware is not supported.
>>> I'm not sure of the purpose of this sentence, it's quite clear that
>>> the SMMU is only supported if available. Also, I'm not sure this
>>> should be spelled out in this document, x86 doesn't have a VT-d or SVM
>>> section.
>> This sentence means, "An SMMU designed by ARM", as opposed to an SMMU
>> (or SMMU-like thing) designed by someone other than ARM.  (And yes, I
>> understand that such things existed before the ARM SMMU came out.)
>> I think people running ARM systems will understand what the sentence means.
> Oh, thanks. I didn't know there was such a difference in the ARM
> world.

Yeah, I think in the x86 world Intel basically both designs and makes
all the chips necessary to build a system; in the historical ARM
ecosystem, a lot of the necessary "motherboard" chips have been made or
designed by the people building the embedded system themselves.  So it
was more natural, before ARM had its own SMMU design, for a vendor to
step up and design / build one of their own.

Apparently in this case that one wasn't very good, and was quickly
superseded by the ARM one.  But since it exists, we have to clarify that
it's only the ARM-designed one which is supported.


Xen-devel mailing list