
Re: [PATCH v2 2/2] xen/arm: Add defensive barrier in get_cycles for Arm64



Hi Wei,

On 14/01/2021 00:18, Wei Chen wrote:
>>> I think if we introduce an empty helper for Arm32, we can reduce
>>> the other chunk inside get_cycles. In addition, if we need to do
>>> the same work for Arm32 in the future, we may not need to modify
>>> get_cycles.
>>
>> I don't really follow this. I wasn't asking about
>> read_cntpct_enforce_ordering(). Instead, I was asking about
>> read_cntpct_stable(), which looks like you just split out of
>> get_cycles().
>>
>> This change looks completely unrelated to the purpose of this
>> series. I am not going to ask you to split it, but I think it
>> should be explained in the commit message.


> Yes, I forgot to explain these changes in the commit message.
>
> When I was trying to add read_cntpct_enforce_ordering into
> get_cycles, there were three ways I could do it:
> 1. Add read_cntpct_enforce_ordering to the end of each branch.
> 2. Add a new cycles_t variable to record the value of each branch,
>    use the end of the function as the single return point, and then
>    add read_cntpct_enforce_ordering at the end of get_cycles.
> 3. Don't touch the get_cycles function body; just rename it, and
>    create another helper named get_cycles that wraps the original
>    function plus read_cntpct_enforce_ordering.
>
> Personally, I prefer #3, so that is how I changed the function.

I agree. Thanks for the explanation! In which case:

Reviewed-by: Julien Grall <jgrall@xxxxxxxxxx>
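
For the archives, option #3 ends up looking roughly like this (a
simplified sketch; the actual counter read, including any erratum
workarounds inside read_cntpct_stable(), is elided):

static inline cycles_t read_cntpct_stable(void)
{
    /* Plain counter read; erratum workarounds elided. */
    return READ_SYSREG64(CNTPCT_EL0);
}

static inline cycles_t get_cycles(void)
{
    cycles_t cnt;

    isb();
    cnt = read_cntpct_stable();

    /*
     * Single place to prevent the counter read from being re-ordered
     * against subsequent memory accesses.
     */
    read_cntpct_enforce_ordering(cnt);

    return cnt;
}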



> How about adding the explanation at the end of the commit message,
> like this:
> "
> .... If this is done in Linux, it will need to be done in Xen as
> well.
>
> To avoid adding read_cntpct_enforce_ordering everywhere, we introduce
> a new helper, read_cntpct_stable, to replace the original get_cycles,
> and turn get_cycles into a wrapper to which we can add
> read_cntpct_enforce_ordering easily.
> "

I have updated the commit message and committed the patch.

Thanks for your contribution.

Cheers,

--
Julien Grall
