
Re: [PATCH for-4.19? v5 07/10] xen: Make the maximum number of altp2m views configurable for x86



On Mon, Jun 10, 2024 at 9:30 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>
> On 09.06.2024 01:06, Petr Beneš wrote:
> > On Thu, Jun 6, 2024 at 9:24 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
> >>> @@ -122,7 +131,12 @@ int p2m_init_altp2m(struct domain *d)
> >>>      struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
> >>>
> >>>      mm_lock_init(&d->arch.altp2m_list_lock);
> >>> -    for ( i = 0; i < MAX_ALTP2M; i++ )
> >>> +    d->arch.altp2m_p2m = xzalloc_array(struct p2m_domain *, d->nr_altp2m);
> >>> +
> >>> +    if ( !d->arch.altp2m_p2m )
> >>> +        return -ENOMEM;
> >>
> >> This isn't really needed, is it? Both ...
> >>
> >>> +    for ( i = 0; i < d->nr_altp2m; i++ )
> >>
> >> ... this and ...
> >>
> >>>      {
> >>>          d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
> >>>          if ( p2m == NULL )
> >>> @@ -143,7 +157,10 @@ void p2m_teardown_altp2m(struct domain *d)
> >>>      unsigned int i;
> >>>      struct p2m_domain *p2m;
> >>>
> >>> -    for ( i = 0; i < MAX_ALTP2M; i++ )
> >>> +    if ( !d->arch.altp2m_p2m )
> >>> +        return;
>
> I'm sorry, the question was meant to be on this if() instead.
>
> >>> +    for ( i = 0; i < d->nr_altp2m; i++ )
> >>>      {
> >>>          if ( !d->arch.altp2m_p2m[i] )
> >>>              continue;
> >>> @@ -151,6 +168,8 @@ void p2m_teardown_altp2m(struct domain *d)
> >>>          d->arch.altp2m_p2m[i] = NULL;
> >>>          p2m_free_one(p2m);
> >>>      }
> >>> +
> >>> +    XFREE(d->arch.altp2m_p2m);
> >>>  }
> >>
> >> ... this ought to be fine without?
> >
> > Could you please elaborate? I honestly don't know what you mean here
> > (by "this isn't needed").
>
> I hope the above correction is enough?

I'm sorry, but not really? I feel like I'm blind, but I can't see
anything I could remove without causing (or risking) a crash.
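
To make the crash I have in mind concrete, here is a minimal stand-alone
sketch (not the actual Xen code; the struct layout and the fail_alloc
toggle are simplified placeholders), assuming p2m_teardown_altp2m() can
still be reached on the cleanup path after p2m_init_altp2m() failed:

#include <stdio.h>
#include <stdlib.h>

struct p2m_domain { int dummy; };

struct domain {
    unsigned int nr_altp2m;
    struct p2m_domain **altp2m_p2m;
};

static int p2m_init_altp2m(struct domain *d, int fail_alloc)
{
    /* Model xzalloc_array() failing: the array pointer stays NULL. */
    if ( fail_alloc )
        return -1; /* -ENOMEM in the real code */

    d->altp2m_p2m = calloc(d->nr_altp2m, sizeof(*d->altp2m_p2m));
    return d->altp2m_p2m ? 0 : -1;
}

static void p2m_teardown_altp2m(struct domain *d)
{
    unsigned int i;

    /*
     * Without this guard, the loop below indexes a NULL array whenever
     * init failed (or never ran), i.e. d->altp2m_p2m[i] dereferences NULL.
     */
    if ( !d->altp2m_p2m )
        return;

    for ( i = 0; i < d->nr_altp2m; i++ )
    {
        free(d->altp2m_p2m[i]);
        d->altp2m_p2m[i] = NULL;
    }

    free(d->altp2m_p2m);
    d->altp2m_p2m = NULL;
}

int main(void)
{
    struct domain d = { .nr_altp2m = 10, .altp2m_p2m = NULL };

    /* Simulate the allocation failing during domain creation ... */
    if ( p2m_init_altp2m(&d, /*fail_alloc=*/1) )
        fprintf(stderr, "init failed, array left NULL\n");

    /* ... followed by the usual cleanup; safe only thanks to the guard. */
    p2m_teardown_altp2m(&d);
    return 0;
}

As far as I can tell, XFREE() on a NULL pointer would itself be harmless;
it's the array indexing in the loop that would fault if the guard were
dropped.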

P.
