Re: [Xen-devel] [PATCH v5 3/8] xen: delay allocation of grant table sub structures
On 08/09/17 17:28, Jan Beulich wrote:
>>>> On 08.09.17 at 08:56, <jgross@xxxxxxxx> wrote:
>> Delay the allocation of the grant table sub structures in order to
>> allow modifying parameters needed for sizing of these structures at a
>> per domain basis. Either do it from gnttab_setup_table() or just
>> before the domain is started the first time.
>
> The reference to gnttab_setup_table() is now sort of stale.
Hmm, yes.
>
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -363,6 +363,9 @@ struct domain *domain_create(domid_t domid, unsigned int domcr_flags,
>>          goto fail;
>>      init_status |= INIT_gnttab;
>>
>> +    if ( domid == 0 && grant_table_init(d) )
>> +        goto fail;
>
> Besides not really liking the special case, why can't
> grant_table_create() make this call, keeping more grant table
> logic within grant_table.c?
Right, this is better.
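Roughly like this (just a sketch, with grant_table_create()'s existing
setup elided):

int grant_table_create(struct domain *d)
{
    struct grant_table *t;

    if ( (t = xzalloc(struct grant_table)) == NULL )
        return -ENOMEM;

    /* ... existing lock and field initialisation elided ... */

    d->grant_table = t;

    /*
     * Dom0 is never started via domain_unpause_by_systemcontroller(),
     * so allocate its sub structures right away.
     */
    if ( d->domain_id == 0 )
        return grant_table_init(d);

    return 0;
}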
> And if you special case Dom0,
> wouldn't it be better to (also) special case the hardware domain
> (in case that's not Dom0)?
As a hardware domain other than dom0 would need to be created via the
appropriate domctl hypercalls, I don't see why the measures applied to
all other domains wouldn't be sufficient in that case.
>
>> @@ -1029,8 +1033,17 @@ int domain_unpause_by_systemcontroller(struct domain *d)
>>       * Creation is considered finished when the controller reference count
>>       * first drops to 0.
>>       */
>> -    if ( new == 0 )
>> +    if ( new == 0 && !d->creation_finished )
>> +    {
>> +        int ret = grant_table_init(d);
>> +
>> +        if ( ret )
>> +        {
>> +            __domain_pause_by_systemcontroller(d, NULL);
>> +            return ret;
>> +        }
>>          d->creation_finished = true;
>> +    }
>
> Adding a grant table call here looks rather arbitrary, if not hackish.
Would it still be hackish if I added a generic function doing last init
calls just before the domain starts to run for the first time? The call
to grant_table_init() would just be the first such late init call for
the domain.
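E.g. something like this (the function name is purely illustrative):

/*
 * Generic hook for work which has to happen just before the domain
 * runs for the first time; grant_table_init() would merely be its
 * first user.
 */
static int domain_start_init(struct domain *d)
{
    int ret = grant_table_init(d);

    if ( ret )
        return ret;

    /* Further late init calls would be added here. */

    return 0;
}

domain_unpause_by_systemcontroller() would then call
domain_start_init() instead of grant_table_init() directly.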
> Why can't you call it from the domctl you're going to add in a later
> patch, requiring the tool stack to issue that domctl in all cases, just
> like e.g. a max_vcpus one is always necessary? That would also
> avoid a possibly confusing error (from the unpause, i.e. not
> obviously related to grant table setup failure). Of course that will
> require merging this patch with the other one to avoid an
> intermediate state in which the call wouldn't be made at all.
This would be another possibility, yes.
Instead of merging the patches, I'd just move patches 6-8 ahead of this
one, so that everything needed, including the tool stack side, is
already in place.
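On the hypervisor side that could end up looking like the following
(all names are placeholders; the real interface is whatever patches
6-8 introduce):

/* Placeholder sketch - the real domctl comes with patches 6-8. */
static int gnttab_set_limits(struct domain *d, unsigned int grant_frames,
                             unsigned int maptrack_frames)
{
    struct grant_table *gt = d->grant_table;

    if ( gt->nr_grant_frames )
        return -EBUSY;          /* Limits can only be set once. */

    gt->max_grant_frames = grant_frames;
    gt->max_maptrack_frames = maptrack_frames;

    return grant_table_init(d);
}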
>
>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -1655,6 +1655,78 @@ gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
>>      gt->nr_status_frames = 0;
>>  }
>>
>> +int
>> +grant_table_init(struct domain *d)
>> +{
>> +    struct grant_table *gt = d->grant_table;
>> +    unsigned int i, j;
>> +
>> +    if ( gt->nr_grant_frames )
>> +        return 0;
>> +
>> +    gt->nr_grant_frames = INITIAL_NR_GRANT_FRAMES;
>> +
>> +    /* Active grant table. */
>> +    if ( (gt->active = xzalloc_array(struct active_grant_entry *,
>> +                                     max_nr_active_grant_frames)) == NULL )
>> +        goto no_mem_1;
>> +    for ( i = 0;
>> +          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
>> +    {
>> +        if ( (gt->active[i] = alloc_xenheap_page()) == NULL )
>> +            goto no_mem_2;
>> +        clear_page(gt->active[i]);
>> +        for ( j = 0; j < ACGNT_PER_PAGE; j++ )
>> +            spin_lock_init(&gt->active[i][j].lock);
>> +    }
>> +
>> +    /* Tracking of mapped foreign frames table */
>> +    gt->maptrack = vzalloc(max_maptrack_frames * sizeof(*gt->maptrack));
>> +    if ( gt->maptrack == NULL )
>> +        goto no_mem_2;
>> +
>> +    /* Shared grant table. */
>> +    if ( (gt->shared_raw = xzalloc_array(void *, max_grant_frames)) == NULL )
>> +        goto no_mem_3;
>> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
>> +    {
>> +        if ( (gt->shared_raw[i] = alloc_xenheap_page()) == NULL )
>> +            goto no_mem_4;
>> +        clear_page(gt->shared_raw[i]);
>> +    }
>> +
>> +    /* Status pages for grant table - for version 2 */
>> +    gt->status = xzalloc_array(grant_status_t *,
>> +                               grant_to_status_frames(max_grant_frames));
>> +    if ( gt->status == NULL )
>> +        goto no_mem_4;
>> +
>> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
>> +        gnttab_create_shared_page(d, gt, i);
>> +
>> +    gt->nr_status_frames = 0;
>> +
>> +    return 0;
>> +
>> + no_mem_4:
>> +    for ( i = 0; i < INITIAL_NR_GRANT_FRAMES; i++ )
>> +        free_xenheap_page(gt->shared_raw[i]);
>> +    xfree(gt->shared_raw);
>> +    gt->shared_raw = NULL;
>> + no_mem_3:
>> +    vfree(gt->maptrack);
>> +    gt->maptrack = NULL;
>> + no_mem_2:
>> +    for ( i = 0;
>> +          i < num_act_frames_from_sha_frames(INITIAL_NR_GRANT_FRAMES); i++ )
>> +        free_xenheap_page(gt->active[i]);
>> +    xfree(gt->active);
>> +    gt->active = NULL;
>> + no_mem_1:
>> +    gt->nr_grant_frames = 0;
>> +    return -ENOMEM;
>> +}
>
> The redundancy between this code and gnttab_grow_table() has
> always bothered me, and now would seem to be a good occasion
> to do away with it. Why don't you defer to gnttab_grow_table()
> anything that function already does (keeping the respective limits
> at zero in here)?
Just to be sure I understand you correctly: you want to get rid of
INITIAL_NR_GRANT_FRAMES and just grow from 0 instead of starting at the
current value of 4, right?
So I would just allocate the arrays (gt->active, gt->maptrack,
gt->shared_raw, gt->status) in grant_table_init().
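I.e. roughly (a sketch only, with the error handling condensed):

int grant_table_init(struct domain *d)
{
    struct grant_table *gt = d->grant_table;

    /*
     * Allocate only the frame pointer arrays; the frames themselves
     * are then populated on demand via gnttab_grow_table(), growing
     * from 0.
     */
    gt->active = xzalloc_array(struct active_grant_entry *,
                               max_nr_active_grant_frames);
    gt->maptrack = vzalloc(max_maptrack_frames * sizeof(*gt->maptrack));
    gt->shared_raw = xzalloc_array(void *, max_grant_frames);
    gt->status = xzalloc_array(grant_status_t *,
                               grant_to_status_frames(max_grant_frames));

    if ( !gt->active || !gt->maptrack || !gt->shared_raw || !gt->status )
    {
        xfree(gt->status);
        gt->status = NULL;
        xfree(gt->shared_raw);
        gt->shared_raw = NULL;
        if ( gt->maptrack )
            vfree(gt->maptrack);
        gt->maptrack = NULL;
        xfree(gt->active);
        gt->active = NULL;
        return -ENOMEM;
    }

    return 0;
}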
Juergen