
Re: [Xen-devel] [PATCH v8 13/15] xen: make grant resource limits per domain



>>> On 20.09.17 at 14:44, <jgross@xxxxxxxx> wrote:
> On 20/09/17 13:48, Jan Beulich wrote:
>>>>> On 20.09.17 at 13:10, <jgross@xxxxxxxx> wrote:
>>> On 20/09/17 12:34, Jan Beulich wrote:
>>>>>>> On 20.09.17 at 08:34, <jgross@xxxxxxxx> wrote:
>>>>> --- a/xen/common/grant_table.c
>>>>> +++ b/xen/common/grant_table.c
>>>>> @@ -1777,13 +1784,15 @@ active_alloc_failed:
>>>>>  
>>>>>  static long
>>>>>  gnttab_setup_table(
>>>>> -    XEN_GUEST_HANDLE_PARAM(gnttab_setup_table_t) uop, unsigned int count)
>>>>> +    XEN_GUEST_HANDLE_PARAM(gnttab_setup_table_t) uop, unsigned int count,
>>>>> +    unsigned int limit_max)
>>>>>  {
>>>>>      struct vcpu *curr = current;
>>>>>      struct gnttab_setup_table op;
>>>>>      struct domain *d = NULL;
>>>>>      struct grant_table *gt;
>>>>>      unsigned int i;
>>>>> +    long ret = 0;
>>>>
>>>> Wouldn't int suffice here?
>>>
>>> I just followed the return type of the function. I think this way is
>>> cleaner, but in case you like int better I can change it.
>> 
>> I sort of expected this reply, but that's not in line with what you
>> did in gnttab_get_status_frames() then. I think we will want to
>> eventually change all function return types to int where the wider
>> type isn't needed.
> 
> Okay. Should I include a patch doing that in this series, or would you
> prefer this cleanup to be delayed until 4.11?

This should be delayed imo - we're past the date where new
non-bug-fix patches should be accepted.
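
For concreteness, the narrowing being discussed looks roughly like
this - a minimal standalone sketch with hypothetical demo_* names,
not the actual Xen functions:

    #include <stdio.h>

    /*
     * Sketch of the return-type cleanup: when a function only ever
     * produces 0 or a small negative error code, "int" carries all
     * the information "long" would.
     */

    #define DEMO_EINVAL 22  /* stand-in for an errno-style error code */

    /* Narrowed return type: the value set {0, -DEMO_EINVAL} fits an int. */
    static int demo_setup_table(unsigned int count, unsigned int limit_max)
    {
        if ( count > limit_max )
            return -DEMO_EINVAL;  /* small negative error code */
        return 0;                 /* success */
    }

    /* A long-returning caller (e.g. a hypercall entry point) loses
     * nothing: the int result widens implicitly and losslessly. */
    static long demo_hypercall(unsigned int count, unsigned int limit_max)
    {
        return demo_setup_table(count, limit_max);
    }

    int main(void)
    {
        printf("%ld\n", demo_hypercall(4, 32));   /* prints 0 */
        printf("%ld\n", demo_hypercall(64, 32));  /* prints -22 */
        return 0;
    }

Nothing is lost at the long-returning boundary, so the narrower type
simply documents the actual value range.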

>>>> Additionally, taking the input values without applying some upper
>>>> cap seems problematic - while we have XSA-77, we still shouldn't
>>>> introduce new issues making disaggregation more unsafe. Perhaps
>>>> the global limits could serve as a cap here?
> 
> Thinking more about it: what can happen in the worst case when no cap
> is being enforced?
> 
> A domain with the privilege to create another domain with arbitrary
> amounts of memory (we have no cap here) might go crazy and give the
> created domain an arbitrary number of grant frames or maptrack
> frames. So what difference does it make whether the memory is spent
> directly on the domain or via grant frames? In both cases there will
> be no memory left for other purposes. I can't see how this would be
> worse than allocating the same amount of memory directly for the new
> domain.

Oh, memory exhaustion wasn't my primary worry, as that's at
least immediately visible. I was rather thinking of a long-lasting
loop or other misbehavior caused by arithmetic overflowing somewhere.
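
To make that concern concrete, here is a standalone sketch - with
hypothetical names, and an illustrative entries-per-frame constant
rather than Xen's actual values - of how an uncapped, guest-controlled
frame count can wrap 32-bit arithmetic well before memory runs out:

    #include <stdint.h>
    #include <stdio.h>

    #define ENTRIES_PER_FRAME 512u  /* illustrative, not Xen's constant */

    /* Uncapped: wraps to 0 for nr_frames == 1 << 23, so a loop bounded
     * by the result runs zero times - or, for slightly smaller wrapped
     * values, far longer than intended. */
    static uint32_t nr_entries_uncapped(uint32_t nr_frames)
    {
        return nr_frames * ENTRIES_PER_FRAME;
    }

    /* Capped: implausible requests are rejected up front, keeping all
     * later arithmetic in a sane range. */
    static int nr_entries_capped(uint32_t nr_frames, uint32_t cap,
                                 uint32_t *out)
    {
        if ( nr_frames > cap )
            return -1;
        *out = nr_frames * ENTRIES_PER_FRAME;
        return 0;
    }

    int main(void)
    {
        uint32_t n;

        /* 2^23 * 2^9 == 2^32: the product wraps to 0. */
        printf("uncapped: %u\n", nr_entries_uncapped(1u << 23));

        if ( nr_entries_capped(1u << 23, 4096, &n) )
            printf("capped: request rejected\n");
        return 0;
    }

A cap turns a silent wrap into a clean, early error.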

>>> I thought about a cap and TBH I'm not sure which one would be sane
>>> to apply. The global limits seem wrong, especially looking at patch
>>> 14: those limits will then apply to dom0 only. And dom0 won't need
>>> many grant frames in the normal case...
>> 
>> I've been thinking about this Dom0 aspect too over lunch. What
>> about allowing the hardware domain to set its limit (only upwards
>> of course) in setup_table(), without any upper bound enforced?
>> This would free up the globals to be used as system-wide limits
>> again.
> 
> This would be possible, of course.
> 
> The question is whether it is worth the need to re-allocate the frame
> pointer arrays.

Input by others would be helpful...
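
For reference, the re-allocation in question would look roughly like
the following - a standalone sketch with hypothetical demo_* names,
not the actual grant table code; the point is that raising the limit
means copying the frame pointer array into a larger allocation:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct demo_gnttab {
        unsigned int max_frames;  /* current limit */
        void **frame;             /* one slot per potential frame */
    };

    static int demo_grow_limit(struct demo_gnttab *gt, unsigned int new_max)
    {
        void **nf;

        if ( new_max <= gt->max_frames )
            return -1;  /* only raising the limit is allowed */

        nf = calloc(new_max, sizeof(*nf));
        if ( !nf )
            return -1;  /* old array stays valid on failure */

        /* Copy the already-populated slots into the larger array. */
        memcpy(nf, gt->frame, gt->max_frames * sizeof(*nf));
        free(gt->frame);
        gt->frame = nf;
        gt->max_frames = new_max;
        return 0;
    }

    int main(void)
    {
        struct demo_gnttab gt = { 4, calloc(4, sizeof(void *)) };

        printf("grow to 32: %d\n", demo_grow_limit(&gt, 32));  /* 0  */
        printf("grow to  8: %d\n", demo_grow_limit(&gt, 8));   /* -1 */
        free(gt.frame);
        return 0;
    }

Whether this extra re-allocation path is worth having is exactly the
open question above.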

>>> So I could make up a cap in the form of either a configurable
>>> constant (CONFIG_* or a boot parameter?) or a fraction of domain
>>> memory. Any preferences here?
>> 
>> A config constant as well as a fraction of domain memory might lock
>> out special-purpose guests. That would leave command line options -
>> as per above, perhaps the ones we already have.
> 
> In case we really need the cap parameters I'd prefer ones distinct
> from the dom0 initial values.

I agree - reusing what we have would be an option primarily
when we don't need anything for Dom0. But if we need two
sets of options, keeping the current ones for global
limits and introducing new sub-options to "dom0=" would
perhaps be better.
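
To illustrate the split, here is a standalone sketch: the existing
gnttab_max_frames/gnttab_max_maptrack_frames parameters would stay as
the global caps, while Dom0's own values come from "dom0="
sub-options. The sub-option names and grammar below are invented for
illustration and are not Xen's actual command line syntax:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static unsigned int dom0_gnttab_frames;    /* 0 = use the default */
    static unsigned int dom0_maptrack_frames;  /* 0 = use the default */

    /* Parse e.g. "gnttab-frames:64,maptrack-frames:1024". */
    static void parse_dom0_opts(const char *opts)
    {
        char buf[128];
        char *s, *tok;

        /* Work on a copy so strtok() can split on commas. */
        snprintf(buf, sizeof(buf), "%s", opts);

        for ( s = buf; (tok = strtok(s, ",")) != NULL; s = NULL )
        {
            if ( !strncmp(tok, "gnttab-frames:", 14) )
                dom0_gnttab_frames = strtoul(tok + 14, NULL, 10);
            else if ( !strncmp(tok, "maptrack-frames:", 16) )
                dom0_maptrack_frames = strtoul(tok + 16, NULL, 10);
        }
    }

    int main(void)
    {
        parse_dom0_opts("gnttab-frames:64,maptrack-frames:1024");
        printf("dom0: %u gnttab frames, %u maptrack frames\n",
               dom0_gnttab_frames, dom0_maptrack_frames);
        return 0;
    }

A real parser would of course live next to the existing dom0=
handling and validate the requested values against the global caps.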

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel