Re: [Xen-devel] [PATCH 3/3] x86/smt: Support for enabling/disabling SMT at runtime
On 03/04/2019 11:44, Jan Beulich wrote:
>>>> On 03.04.19 at 12:17, <andrew.cooper3@xxxxxxxxxx> wrote:
>> On 03/04/2019 10:33, Jan Beulich wrote:
>>>>>> On 02.04.19 at 21:57, <andrew.cooper3@xxxxxxxxxx> wrote:
>>>> Slightly RFC.  I'm not very happy with the continuation situation,
>>>> but -EBUSY is the preexisting style, and it seems to be the only
>>>> option from tasklet context.
>>> Well, offloading the re-invocation to the caller isn't really nice.
>>> Looking at the code, is there any reason why we couldn't use the
>>> usual -ERESTART / hypercall_create_continuation()?  This would
>>> require a little bit of rework, in particular to allow passing the
>>> vCPU into hypercall_create_continuation(), but beyond that I can't
>>> see any immediate obstacles.  Though clearly I wouldn't make this a
>>> prerequisite for the work here.
>> The problem isn't really the ERESTART. We could do some plumbing and
>> make it work, but the real problem is that I can't stash the current cpu
>> index in the sysctl data block across the continuation point.
>>
>> At the moment, the logic depends on getting all the way through the
>> for_each_present_cpu() loop without taking a further continuation,
>> once all CPUs are in the correct state.
> But these are two orthogonal things: one is how to invoke the
> continuation, and the other is where the continuation is to resume
> from.  I think the former is more important to address, as it affects
> what the tools side code needs to look like.
Right, but -EBUSY is consistent with how the single online/offline ops
function at the moment, which is why I reused it here.
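To make the caller side concrete: with the -EBUSY style, the
continuation is entirely the toolstack's problem, and it simply
retries.  A minimal sketch (the wrapper name is invented for
illustration, xch being the usual xc_interface handle; the real entry
point is the CPU hotplug sysctl):

    /* Hypothetical toolstack-side retry loop.  xc_smt_enable() is an
     * invented name, not an existing libxc call. */
    int rc;

    do {
        rc = xc_smt_enable(xch);
    } while ( rc == -EBUSY );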
>
>>>> + for_each_present_cpu ( cpu )
>>>> + {
>>>> + if ( cpu == 0 )
>>>> + continue;
>>> Is this special case really needed? If so, perhaps worth a brief
>>> comment?
>> Trying to down cpu 0 is a hard -EINVAL.
> But here we're on the CPU-up path. Plus, for eventually supporting
> the offlining of CPU 0, it would feel slightly better if you used
> smp_processor_id() here.
Are there any processors where you can actually take CPU 0 offline?  It's
certainly not possible on any Intel or AMD CPUs.
While I can appreciate the theoretical end goal, it isn't a reality and
I see no signs of that changing. Xen very definitely cannot take CPU 0
offline, nor can hardware, and I don't see any value in jumping through
hoops for an end goal which doesn't exist.
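(For reference, the variant suggested above would amount to no more
than

    if ( cpu == smp_processor_id() )
        continue;

in place of the cpu == 0 check, but given the above, it buys nothing
in practice.)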
>>>> + if ( cpu >= max_cpus )
>>>> + break;
>>>> +
>>>> + if ( x86_cpu_to_apicid[cpu] & sibling_mask )
>>>> + ret = cpu_up_helper(_p(cpu));
>>> Shouldn't this be restricted to CPUs a sibling of which is already
>>> online? And widened at the same time, to also online thread 0
>>> if one of the other threads is already online?
>> Unfortunately, that turns into a rat's nest very quickly, which is
>> why I gave up and simplified the semantics to strictly "this shall
>> {on,off}line the nonzero sibling threads".
> Okay, if that's the intention, then I can certainly live with this.
> But it needs to be called out at the very least in the public header.
> (It might be worthwhile setting up a flag right away for "full"
> behavior, but leaving acting upon it unimplemented.)  It also wouldn't
> hurt if the patch description already set expectations accordingly.
>
> Then again, considering your "maxcpus=" related question, it would
> certainly be odd for people to see non-zero threads come online here
> when they've intentionally left entire cores or nodes offline for
> whatever reason.  Arguably that's not something people would commonly
> do, and hence it may not be worth spending meaningful extra effort on.
> But as above, any such "oddities" should be spelled out, so that it
> can be recognized that they're not oversights.
And we come back to Xen's perennial problem of having no documentation.
I'll see if I can find some time to put some Sphinx/RST together for this.
As for the maxcpus= behaviour, I think it is sufficiently niche, limited
to debugging circumstances only, that perhaps we can ignore it.  I
certainly don't expect to see maxcpus= used in production.
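In the meantime, a worked example may help set expectations for the
selection logic.  Assuming SMT2 and the conventional layout where the
low APIC ID bit selects the thread within a core (so sibling_mask is
1):

    /* Which CPUs the new sysctl touches, under the assumptions above:
     *
     *   APIC ID 0 -> thread 0 of core 0: left alone
     *   APIC ID 1 -> thread 1 of core 0: onlined/offlined
     *   APIC ID 2 -> thread 0 of core 1: left alone
     *   APIC ID 3 -> thread 1 of core 1: onlined/offlined
     */
    if ( x86_cpu_to_apicid[cpu] & sibling_mask )
        ret = cpu_up_helper(_p(cpu));  /* cpu_down_helper() on the
                                        * disable path */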
~Andrew