Re: [PATCH] xen/arm: Introduce pmu_access parameter
Hi Stefano,

On 01/09/2021 18:54, Stefano Stabellini wrote:
> On Wed, 1 Sep 2021, Julien Grall wrote:
>> On 01/09/2021 14:10, Bertrand Marquis wrote:
>>> On 1 Sep 2021, at 13:55, Julien Grall <julien@xxxxxxx> wrote:
>>>> Hi,
>>>>
>>>> On 01/09/2021 13:43, Michal Orzel wrote:
>>>>> Introduce a new Xen command line parameter called "pmu_access". The
>>>>> default value is "trap": Xen traps PMU accesses. If pmu_access is
>>>>> set to "native", Xen does not trap PMU accesses, allowing all guests
>>>>> to access the PMU registers. However, guests cannot make use of the
>>>>> PMU overflow interrupt, as the PMU uses a PPI which Xen cannot route
>>>>> to guests. This option is only intended for development and testing
>>>>> purposes. Do not use it in a production system.
>>>>
>>>> I am afraid your option is not safe even in a development system, as
>>>> a vCPU may move between pCPUs. However, even if we restricted its use
>>>> to pinned vCPUs *and* dedicated pCPUs, I am not convinced that
>>>> exposing a half-baked PMU (the overflow interrupt would not work) to
>>>> the guest is the right solution. It likely means the guest OS would
>>>> need to be modified, and therefore the usefulness of this option is
>>>> fairly limited.
>>>>
>>>> So I think the first steps are:
>>>>  1) Make the PPI work. There were some attempts at this in the past
>>>>     on xen-devel; you could have a look.
>>>>  2) Provide PMU bindings.
>>>>
>>>> With that in place, we can discuss how to expose the PMU even if it
>>>> is unsafe in some conditions.
>>>
>>> With those limitations, using the PMU to monitor system performance,
>>> or for some specific use cases, is still really useful. We are using
>>> it to benchmark Xen or some applications, to compare their behaviour
>>> with a native system, or to analyse the performance of Xen itself
>>> (hypercalls, context switch, etc.). I also had to write a patch almost
>>> exactly like this one to provide to customers a few months back.
>>
>> I understand this is useful for some setups, and I am not trying to
>> say we should not have a way to expose the PMU (even unsafely)
>> upstream. However, I think the option as it stands is too wide (this
>> should be a per-domain knob) and we should properly expose the PMU
>> (interrupts, bindings...).
>
> I have never used the PMU directly myself, only provided a patch
> similar to this one. But as far as I could tell the users were fully
> satisfied with it, and it had no interrupt support either. Could it be
> that interrupts are not actually needed to read the perf counters,
> which is probably what users care about?

You don't need the interrupts to read the perf counters. But AFAIU, you
would have to poll them at a regular interval yourself. There is also the
question of how to catch the overflow.

> In regards to "this should be a per-domain knob": in reality, if you
> are doing PMU experiments you don't care whether only one or all
> domains have access. You are working in a controlled environment trying
> to figure out if your setup meets the timing requirements. There are no
> security or safety concerns (those are different experiments), and
> there is no interference problem in the sense of multiple domains
> trying to access the PMU at the same time -- you control the domains,
> you decide which one is accessing it.
>
> So I am in favor of a patch like this one because it actually satisfies
> our requirements. Even if we had per-domain support and interrupt
> support, I don't think they would end up being used.

I have to disagree with that. There are valid use cases where you may
want to expose the PMU to a single domain for using perf, because you
know it is safe to do so. I appreciate this may not be the use case for
your users (yet), but I have seen (and used) it on x86.
So to me this approach is short-sighted. TBH, this is not the first time
I have seen a patch along the lines of "let's expose the same feature to
everyone because it is easy to do", and I really dislike it. Exposing a
new feature from the toolstack is easier than you think (barring the lack
of reviews); it is a matter of creating a new flag, a new option. That
would make Xen a lot more flexible and enable more users...

So as it stands:

Nacked-by: Julien Grall <jgrall@xxxxxxxxxx>

Cheers,

--
Julien Grall
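[Editorial note: as a concrete illustration of the polling point above, the
sketch below shows roughly how a guest could sample the AArch64 cycle
counter directly once Xen no longer traps PMU accesses (i.e. the proposed
pmu_access=native). It is only a sketch, not part of the patch: it assumes
the code runs at EL1 inside the guest (or that PMUSERENR_EL0 has been set
up to permit EL0 access) and, per Julien's caveat, that the vCPU is pinned
to a dedicated pCPU so successive readings come from the same physical
counter. With no PPI routed to the guest there is no overflow interrupt,
so the counter is simply read before and after the code of interest.]

/*
 * Minimal sketch, not part of the patch under discussion: polling the
 * AArch64 PMU cycle counter from inside a guest, assuming Xen was booted
 * with the proposed "pmu_access=native" so these accesses are not
 * trapped, and that this code runs at EL1 (or PMUSERENR_EL0 allows EL0
 * access).
 */
#include <stdint.h>

static inline void enable_cycle_counter(void)
{
    uint64_t pmcr;

    /* PMCR_EL0: E (bit 0) enables the counters, C (bit 2) resets the
     * cycle counter. */
    asm volatile("mrs %0, pmcr_el0" : "=r" (pmcr));
    pmcr |= (1ULL << 0) | (1ULL << 2);
    asm volatile("msr pmcr_el0, %0" : : "r" (pmcr));

    /* PMCNTENSET_EL0: bit 31 enables the cycle counter (PMCCNTR_EL0). */
    asm volatile("msr pmcntenset_el0, %0" : : "r" (1ULL << 31));
    asm volatile("isb");
}

static inline uint64_t read_cycle_counter(void)
{
    uint64_t cycles;

    /* PMCCNTR_EL0 is the architectural 64-bit cycle counter. */
    asm volatile("mrs %0, pmccntr_el0" : "=r" (cycles));
    return cycles;
}

/*
 * Usage: with no overflow interrupt available, sample the counter around
 * the region of interest.
 */
static uint64_t measure_cycles(void (*fn)(void))
{
    uint64_t start = read_cycle_counter();

    fn();
    return read_cycle_counter() - start;
}

[The event counters (selected through PMSELR_EL0 and programmed via
PMXEVTYPER_EL0/PMXEVCNTR_EL0) could be sampled the same way; catching an
overflow, as noted in the thread, would still require either the PPI being
routed to the guest or sufficiently frequent polling.]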