
Re: [Xen-devel] [PATCH 16/16] SUPPORT.md: Add limits RFC



>>> On 22.11.17 at 19:01, <george.dunlap@xxxxxxxxxx> wrote:

> 
>> On Nov 21, 2017, at 9:26 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>
>>>>> On 13.11.17 at 16:41, <george.dunlap@xxxxxxxxxx> wrote:
>>> +### Virtual CPUs
>>> +
>>> +    Limit, x86 PV: 8192
>>> +    Limit-security, x86 PV: 32
>>> +    Limit, x86 HVM: 128
>>> +    Limit-security, x86 HVM: 32
>>
>> Personally I consider the "Limit-security" numbers too low here, but
>> I have no proof that higher numbers will work _in all cases_.
> 
> You don’t have to have conclusive proof that the numbers work in all
> cases; we only need to have reasonable evidence that higher numbers are
> generally reliable.  To use US legal terminology, it’s “preponderance of
> evidence” (usually used in civil trials) rather than “beyond a
> reasonable doubt” (used in criminal trials).
> 
> In this case, there are credible claims that using more vcpus opens
> users up to a host DoS, and no evidence (or arguments) to the contrary.
>  I think it would be irresponsible, under those circumstances, to tell
> people that they should provide more vcpus to untrusted guests.
> 
> It wouldn’t be too hard to gather further evidence.  If someone
> competent spent a few days trying to crash a larger guest and failed,
> then that would be reason to think that perhaps larger numbers were safe.
> 
>>
>>> +### Virtual RAM
>>> +
>>> +    Limit-security, x86 PV: 2047GiB
>>
>> I think this needs splitting for 64- and 32-bit (the latter can go up
>> to 168GiB only on hosts with no memory past the 168GiB boundary,
>> and only up to 128GiB on larger ones, without this being a processor
>> architecture limitation).
> 
> OK.  Below is an updated section.  It might be good to specify how large
> is "larger".

Well, simply anything with memory extending beyond the 168GiB
boundary, i.e. ...

> ---
> ### Virtual RAM
> 
>     Limit-security, x86 PV 64-bit: 2047GiB
>     Limit-security, x86 PV 32-bit: 168GiB (see below)
>     Limit-security, x86 HVM: 1.5TiB
>     Limit, ARM32: 16GiB
>     Limit, ARM64: 1TiB
> 
> Note that there are no theoretical limits to 64-bit PV or HVM guest sizes
> other than those determined by the processor architecture.
> 
> All 32-bit PV guest memory must be under 168GiB;
> this means the total memory for all 32-bit PV guests cannot exceed 168GiB.
> On larger hosts, this limit is 128GiB.

... "On hosts with memory extending beyond 168GiB, this limit is
128GiB."

>>> +### Event Channel FIFO ABI
>>> +
>>> +    Limit: 131072
>>
>> Are we certain this is a security supportable limit? There is at least
>> one loop (in get_free_port()) which can potentially have this number
>> of iterations.
> 
> I have no idea.  Do you have another limit you’d like to propose instead?

Since I can't prove that the given limit would actually be a problem,
it's also hard to suggest an alternative. Probably the limit is fine as is,
despite the number looking pretty big: In x86 PV page table
handling we're fine processing a single L2 in one go, which
involves twice as many iterations (otoh I'm struggling to find a
call tree where {alloc,free}_l2_table() would actually be called
with "preemptible" set to false).

> Also, I realized that I somehow failed to send out the 17th patch (!),
> which primarily had XXX entries for qemu-upstream/qemu-traditional, and
> host serial console support.
> 
> Shall I try to make a list of supported serial cards from
> /build/hg/xen.git/xen/drivers/char/Kconfig?

Hmm, interesting question. For the moment I'm having a hard time
seeing how, for someone using an arbitrary serial card, problems with
it could be caused by guest behavior. Other functionality problems
(read: bugs or missing code for unknown cards/quirks) aren't relevant
to security support, afaict.

Jan
