
Re: [PATCH v2] gnttab: defer allocation of status frame tracking array

On 04/01/2021 15:22, Jan Beulich wrote:
> On 04.01.2021 16:04, Andrew Cooper wrote:
>> On 23/12/2020 15:13, Jan Beulich wrote:
>>> This array can be large when many grant frames are permitted; avoid
>>> allocating it when it's not going to be used anyway, by doing this only
>>> in gnttab_populate_status_frames().
>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>> ---
>>> v2: Defer allocation to when a domain actually switches to the v2 grant
>>>     API.
>> I see this as a backwards step.  It turns a build-time -ENOMEM into a
>> runtime -ENOMEM, and if you recall from one of the XSAs, the Windows PV
>> drivers will BUG() if set_version fails.  (Yes - this is dumb behaviour,
>> but it is in the field now.)
> Well, if this was the only source of -ENOMEM (i.e. none was there
> previously), I'd surely understand the concern. But there have been
> memory allocations before on this path.

... you're literally writing a patch saying "avoid large allocation at
domain create time, and make it at runtime instead" and then trying to
argue that it isn't a concern because there are other memory allocations.

It is very definitely a backwards step irrespective of the size of the
allocation, even if the current behaviour isn't necessarily perfect.
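
For reference, the kind of deferral being debated can be sketched as below. This is an illustrative model only, using simplified stand-in structures and libc allocation, not the actual Xen code from the patch:

```c
#include <stdlib.h>
#include <errno.h>

/* Simplified stand-in for Xen's per-domain grant table state. */
struct grant_table {
    unsigned int max_status_frames;
    void **status;              /* allocated lazily; NULL until v2 is used */
};

/* Models the patch's approach: the status frame tracking array is only
 * allocated when a guest actually switches to the v2 grant API, i.e.
 * from the equivalent of gnttab_populate_status_frames().  Failure here
 * is the runtime -ENOMEM being objected to above. */
int populate_status_frames(struct grant_table *gt)
{
    if (!gt->status) {
        gt->status = calloc(gt->max_status_frames, sizeof(*gt->status));
        if (!gt->status)
            return -ENOMEM;
    }
    return 0;
}
```

The trade-off in the thread is exactly this: the allocation (and its possible failure) moves from domain creation to the guest-visible set_version path.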

>  In any event, this will
> need settling between you and Julien, as it was him to request the
> change.

Well - that's because gnttab v2 is disabled in general on ARM.

Conditionally avoiding the allocation when opt_gnttab_max_version is 1
would be fine, because that introduces no new runtime failure paths for
guests.
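
The suggested alternative can be sketched as follows. opt_gnttab_max_version mirrors Xen's boot parameter; the surrounding structure and function names are illustrative, not the real hypervisor code:

```c
#include <stdlib.h>
#include <errno.h>

/* Mirrors Xen's gnttab max-version boot parameter (illustrative copy). */
static unsigned int opt_gnttab_max_version = 2;

struct grant_table {
    unsigned int max_status_frames;
    void **status;
};

/* Domain-creation-time setup: if v2 is ruled out system-wide, the status
 * array can never be used, so skipping it introduces no new runtime
 * failure path.  Otherwise allocate up front, so any -ENOMEM surfaces at
 * domain creation, as it does today. */
int grant_table_init(struct grant_table *gt)
{
    gt->status = NULL;

    if (opt_gnttab_max_version < 2)
        return 0;

    gt->status = calloc(gt->max_status_frames, sizeof(*gt->status));
    return gt->status ? 0 : -ENOMEM;
}
```
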

The correctness of this change does depend on opt_gnttab_max_version
being invariant for the lifetime of a domain.  If it were to become a
runtime parameter, it would need caching per domain (which is frankly
how it should have been implemented all along, along with a parameter in


