Re: [Xen-devel] [PATCH v7 21/21] xl: vNUMA support
On Wed, Mar 11, 2015 at 03:12:06PM +0000, Ian Campbell wrote:
> On Mon, 2015-03-09 at 12:51 +0000, Wei Liu wrote:
>
> > +Each B<VNODE_CONFIG_OPTION> is a quoted key=value pair. Supported
> > +B<VNODE_CONFIG_OPTION>s are:
>
> Which of these are optional and which are mandatory? All mandatory?
>
Yes, all mandatory for now.
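For reference, a vnuma stanza in this scheme looks something like the
following (node sizes and vcpu ranges are illustrative, not taken from
the patch):

    vnuma = [ ["pnode=0","size=512","vcpus=0-1","vdistances=10,20"],
              ["pnode=1","size=512","vcpus=2-3","vdistances=20,10"] ]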
> > + if (!xlu_cfg_get_long (config, "maxmem", &l, 0))
> > + max_memkb1 = l * 1024;
>
> I think you should arrange that if the user has specified maxmem then it
> has already been parsed and is available in b_info->max_memkb for use in
> this function. Reparsing it just leaves scope to get out of sync.
>
> In particular this doesn't handle the defaulting of maxmem to memory.
>
This is exactly the reason I do it here. At this point
b_info->max_memkb is always set.
If the user has specified maxmem= we should check that it matches the
vnuma total to avoid misconfiguration; otherwise we just use the vnuma
total as maxmem.
But see below regarding maxvcpus=.
> > +#define ABORT_IF_FAILED(str) \
> > + do { \
> > + if (endptr == value || val == ULONG_MAX) { \
>
> Looking at the uses, I think you mean str here where you have value, no?
>
Yes.
> Given that you exit() here I think you could write this as a helper
> function which did the parsing for you too, i.e. folds in the strtoul.
>
OK.
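Perhaps something like this (a hypothetical sketch of such a helper,
assuming the usual xl includes; the message wording is made up):

    /* Parse a decimal number from str or bail out. */
    static unsigned long parse_ulong(const char *str)
    {
        char *endptr;
        unsigned long val;

        errno = 0;
        val = strtoul(str, &endptr, 10);
        if (endptr == str || errno == ERANGE) {
            fprintf(stderr, "xl: failed to parse \"%s\" as a number\n",
                    str);
            exit(1);
        }

        return val;
    }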
> > + if (total_cpus != b_info->max_vcpus) {
> > + fprintf(stderr, "xl: vnuma vcpus and maxvcpus= mismatch\n");
> > + exit(1);
> > + }
>
> Can't we do as we do with memory here and set max_vcpus if it isn't
> already configured?
>
This can be done. It will require moving parse_vnuma_config to the
beginning of parse_config_data and stubbing out the parsing of
maxvcpus= and maxmem= when a vnuma configuration is present.
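The vcpus handling would then mirror the memory handling, roughly
(hypothetical sketch; maxvcpus_set stands for whether the user gave
maxvcpus= explicitly and is not a name from the patch):

    /* Hypothetical sketch: default maxvcpus from the vnuma total. */
    if (maxvcpus_set) {
        if (total_cpus != b_info->max_vcpus) {
            fprintf(stderr, "xl: vnuma vcpus and maxvcpus= mismatch\n");
            exit(1);
        }
    } else {
        b_info->max_vcpus = total_cpus;
    }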
Wei.
>