
Re: [Xen-devel] [PATCH 2/5] x86/time: implement tsc as clocksource



>>> On 22.03.16 at 21:40, <joao.m.martins@xxxxxxxxxx> wrote:
> On 03/22/2016 04:02 PM, Jan Beulich wrote:
>>>>> On 22.03.16 at 16:51, <joao.m.martins@xxxxxxxxxx> wrote:
>>> On 03/22/2016 12:46 PM, Jan Beulich wrote:
>>>>>>> On 22.03.16 at 13:41, <joao.m.martins@xxxxxxxxxx> wrote:
>>>>>>
>>>>> I found one issue in the tsc clocksource initialization path with
>>>>> respect to the reliability check. This check runs when the platform
>>>>> timer is initialized, but not all CPUs are up at that point
>>>>> (init_xen_time()), which means the check will always succeed. So for
>>>>> clocksource=tsc I need to defer the initialization to a later point
>>>>> (in verify_tsc_reliability()) and, if the check succeeds, switch to
>>>>> that clocksource. Unless you disagree, v2 would do this and just
>>>>> requires one additional preparatory change prior to this patch.
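
For concreteness, the deferred selection could look roughly like the below
(a sketch only: the switch helper and plt_tsc are placeholder names, not
necessarily what v2 will actually use):

/*
 * Sketch: run from the existing verify_tsc_reliability() initcall, i.e.
 * once all boot-time CPUs have been brought up, so the warp check below
 * actually exercises every online CPU.
 */
static int __init verify_tsc_reliability(void)
{
    if ( boot_cpu_has(X86_FEATURE_TSC_RELIABLE) )
    {
        tsc_check_reliability();
        if ( tsc_max_warp )
            setup_clear_cpu_cap(X86_FEATURE_TSC_RELIABLE);
        else if ( !strcmp(opt_clocksource, "tsc") )
            switch_platform_timer(&plt_tsc);  /* placeholder helper */
    }

    return 0;
}
__initcall(verify_tsc_reliability);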
>>>>
>>>> Hmm, that's suspicious when thinking about CPU hotplug: What
>>>> do you intend to do when a CPU comes online later, failing the
>>>> check?
>>> Good point, but I am not sure whether that would happen. The initcall
>>> verify_tsc_reliability seems to be called only for the boot processor.
>>> Or are you saying that initcalls are also invoked when hotplugging CPUs
>>> later on? If that's the case then my suggestion wouldn't work as you
>>> point out - or rather, not without runtime switching support, as you
>>> note below.
>> 
>> Looks like I didn't express myself clearly enough: "failing the check"
>> wasn't meant to imply the failure would actually occur, but rather
>> that failure would occur if the check was run on that CPU. IOW the
>> case of a CPU getting hotplugged which doesn't satisfy the needs
>> for selecting TSC as the clock source.
> Ah, I see. I believe this wouldn't be an issue with the current rendezvous
> mechanism (std_rendezvous), as the delta is computed between the
> local_tsc_stamp taken on that (hotplugged) CPU during calibration and the
> rdtsc() done by the guest on that same CPU, even though system_time is
> based on CPU0's TSC (the master).
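
(For reference, the guest-side computation under std_rendezvous is roughly
the below - simplified pseudo-code, leaving out the version/seqlock handling
and the wider-than-64-bit scaling - and it indeed only ever uses a same-CPU
delta:)

/*
 * Simplified guest-side view of the per-vCPU time info (field names as in
 * struct vcpu_time_info); tsc_timestamp is filled from the vCPU's
 * local_tsc_stamp at calibration time.
 */
static uint64_t guest_pv_clock(const struct vcpu_time_info *t)
{
    /* Delta between two TSC reads on the *same* CPU ... */
    uint64_t delta = rdtsc() - t->tsc_timestamp;

    if ( t->tsc_shift >= 0 )
        delta <<= t->tsc_shift;
    else
        delta >>= -t->tsc_shift;

    /* ... even though system_time itself is CPU0/master based. */
    return t->system_time + ((delta * t->tsc_to_system_mul) >> 32);
}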
> 
> However it can be a problem with PVCLOCK_TSC_STABLE_BIT, as the hotplugged
> CPU could have a drifted TSC, and since setting this flag relies on the
> TSCs being synchronized, we would need to clear the flag entirely.

Except that we can't validly clear that flag afaics once guests have
already been started. The only option I see here would be to never set
this flag if CPU hotplug is possible - by looking at the hot-pluggable
CPU count and, if it is non-zero, perhaps allowing a command line
override to indicate no hotplug is intended (it may well be that such
an override is already implicitly available).
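
Something along these lines, perhaps (only a sketch - whether disabled_cpus
is the right thing to look at, and the "no-cpu-hotplug" option name, are
assumptions on my part):

/*
 * Sketch: only permit PVCLOCK_TSC_STABLE_BIT when no further CPUs can show
 * up later, or when the admin explicitly promises not to hotplug.
 * "no-cpu-hotplug" is a made-up option name.
 */
static bool_t __initdata opt_no_cpu_hotplug;
boolean_param("no-cpu-hotplug", opt_no_cpu_hotplug);

static bool_t __init may_set_tsc_stable_bit(void)
{
    if ( !disabled_cpus )        /* no hot-pluggable CPUs reported */
        return 1;
    return opt_no_cpu_hotplug;   /* command line override */
}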

>>>> So far we don't do runtime switching of the clock source,
>>>> and I'm not sure that's a good idea to do when there are running
>>>> guests.
>>> Totally agree, but I am proposing to do this during the initialization
>>> phase, where there wouldn't be any guests running yet. We would start
>>> with HPET, and only switch to TSC if that check has passed (which would
>>> happen just once).
>>>
>>> A simpler alternative would be to call init_xen_time() after all CPUs
>>> are brought up (which would also keep this patch as is), but I am not
>>> sure about the repercussions of that.
>> 
>> I don't see how either would help with the hotplug case.
> This was in response to what I thought you meant with your earlier question
> (which I misunderstood). But my question still stands, I believe. The
> reason for choosing between my suggested approaches is that
> tsc_check_reliability() requires all CPUs to be up for the check to be
> performed correctly.

Sure - it seems quite obvious that all CPUs available at boot time
should be checked.
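
To illustrate why (a heavily abridged sketch of the per-CPU warp check
run from tsc_check_reliability(); the locking granularity and the timeout
handling are simplified):

/*
 * Every participating CPU repeatedly publishes its TSC and compares it
 * against the value last published by any other CPU, recording the
 * largest backwards step.  A CPU that isn't online at this point simply
 * never takes part, which is why the check must run only once all
 * boot-time CPUs are up.
 */
static DEFINE_SPINLOCK(sync_lock);
static uint64_t last_tsc;
static uint64_t max_warp;        /* mirrors the real tsc_max_warp */

static void check_tsc_warp_sketch(uint64_t end)
{
    uint64_t prev, now;

    do {
        spin_lock(&sync_lock);
        prev = last_tsc;
        now = rdtsc();
        last_tsc = now;
        spin_unlock(&sync_lock);

        /* TSC apparently went backwards relative to another CPU. */
        if ( prev > now && prev - now > max_warp )
            max_warp = prev - now;
    } while ( now < end );       /* spin for a fixed calibration window */
}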

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

