
Re: [PATCH v2] xen/dt: Remove loop in dt_read_number()


  • To: "Orzel, Michal" <michal.orzel@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Alejandro Vallejo <agarciav@xxxxxxx>
  • Date: Wed, 18 Jun 2025 13:27:07 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>
  • Delivery-date: Wed, 18 Jun 2025 11:27:38 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed Jun 18, 2025 at 9:06 AM CEST, Michal Orzel wrote:
>
>
> On 17/06/2025 19:13, Alejandro Vallejo wrote:
>> The DT spec declares only two number types for a property: u32 and u64,
>> as per Table 2.3 in Section 2.2.4. Remove unbounded loop and replace
>> with a switch statement. Default to a size of 1 cell in the nonsensical
>> size case, with a warning printed on the Xen console.
>> 
>> Suggested-by: Daniel P. Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>
>> Signed-off-by: Alejandro Vallejo <agarciav@xxxxxxx>
>> ---
>> v2:
>>   * Added missing `break` on the `case 2:` branch and added
>>     ASSERT_UNREACHABLE() to the default path
>> ---
>>  xen/include/xen/device_tree.h | 17 ++++++++++++++---
>>  1 file changed, 14 insertions(+), 3 deletions(-)
>> 
>> diff --git a/xen/include/xen/device_tree.h b/xen/include/xen/device_tree.h
>> index 75017e4266..2ec668b94a 100644
>> --- a/xen/include/xen/device_tree.h
>> +++ b/xen/include/xen/device_tree.h
>> @@ -261,10 +261,21 @@ void intc_dt_preinit(void);
>>  /* Helper to read a big number; size is in cells (not bytes) */
>>  static inline u64 dt_read_number(const __be32 *cell, int size)
>>  {
>> -    u64 r = 0;
>> +    u64 r = be32_to_cpu(*cell);
>> +
>> +    switch ( size )
>> +    {
>> +    case 1:
>> +        break;
>> +    case 2:
>> +        r = (r << 32) | be32_to_cpu(cell[1]);
>> +        break;
>> +    default:
>> +        /* Nonsensical size. Default to 1 cell. */
> I wonder why there are so many examples of device trees in Linux with
> #address-cells = <3>. Also, libfdt defines FDT_MAX_NCELLS as 4 with the
> comment "maximum value for #address-cells and #size-cells", but I guess it
> follows the IEEE 1275 standard, to which the DT spec "is loosely related".
>
> ~Michal

I could imagine DTs encoding CHERI 64-bit capabilities as addresses, which could
require 4 cells. Needless to say, this function wouldn't even be in the top 10
biggest problems in making Xen run happily on a CHERI-capable processor.

As for #address-cells = <3>, I really can't think of a reason except testing
theoretical corner cases.  
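
For reference, here is a rough sketch of what the whole helper looks like with
this patch applied, assuming the default path carries the ASSERT_UNREACHABLE()
mentioned in the v2 notes (that hunk is trimmed from the quote above, so the
exact code may differ):

    static inline u64 dt_read_number(const __be32 *cell, int size)
    {
        /* Read the first cell unconditionally; nonsensical sizes fall back to it. */
        u64 r = be32_to_cpu(*cell);

        switch ( size )
        {
        case 1:
            break;
        case 2:
            /* Two cells: most significant 32 bits come first, per the DT spec. */
            r = (r << 32) | be32_to_cpu(cell[1]);
            break;
        default:
            /* Nonsensical size. Default to 1 cell. */
            ASSERT_UNREACHABLE();
            break;
        }

        return r;
    }

So the switch simply replaces the old loop over `size` iterations without
changing behaviour for the two sizes the spec actually allows.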

Cheers,
Alejandro
