
Re: [Xen-devel] [PATCH] xen/arm: vgic-v3: Fix the typo of GICD IRQ active status range



Hi Julien,

On 2020/1/7 23:12, Julien Grall wrote:


On 07/01/2020 12:55, Wei Xu wrote:
Hi Julien,
As only one entity should manage the UART (i.e. Xen or Dom0), we today assume this will be managed by Xen. Xen should expose a partial virtual UART (only a few registers are emulated) to dom0 as a replacement.

This is usually done by the UART driver. Looking at the log you pasted in a separate e-mail:

(XEN) Platform: Generic System
(XEN) Unable to initialize acpi uart: -9
(XEN) Bad console= option 'dtuart'

So Xen didn't manage to initialize the UART. The -9 suggests Xen didn't find a driver for your UART. At the moment, Xen is only able to detect pl011, sbsa, and sbsa32 UARTs with ACPI. What type of UART is used on your platform?


Thanks!
Got it.
Our UART is 8250.

You would need to teach the 8250 driver how to initialize the UART with ACPI. It is not very difficult to do; have a look at the pl011 version.
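
Very roughly, I would expect it to end up looking something like the sketch
below at the bottom of ns16550.c, modeled on the ACPI_DEVICE_START block in
pl011.c. This is only a sketch under assumptions: ns16550_acpi_uart_init()
and ns16550_init_from_spcr() are names I made up here (the latter stands in
for whatever helper ends up consuming the SPCR fields), and I am assuming
ACPI_DBG2_16550_COMPATIBLE is the right DBG2 subtype constant.

    #ifdef CONFIG_ACPI
    #include <xen/acpi.h>

    static int __init ns16550_acpi_uart_init(const void *data)
    {
        struct acpi_table_spcr *spcr = NULL;
        acpi_status status;

        /* Under ACPI the console UART is described by the SPCR table. */
        status = acpi_get_table(ACPI_SIG_SPCR, 0,
                                (struct acpi_table_header **)&spcr);
        if ( ACPI_FAILURE(status) )
        {
            printk("ns16550: Failed to get SPCR table\n");
            return -EINVAL;
        }

        /*
         * Placeholder: pick up the MMIO base, interrupt and baud rate
         * from the SPCR and program the UART, mirroring what the
         * device tree init path of this driver already does.
         */
        return ns16550_init_from_spcr(spcr);
    }

    ACPI_DEVICE_START(ans16550, "NS16550 UART", DEVICE_SERIAL)
        .class_type = ACPI_DBG2_16550_COMPATIBLE,
        .init = ns16550_acpi_uart_init,
    ACPI_DEVICE_END

    #endif /* CONFIG_ACPI */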


Thanks!
It is still not working even after I changed the condition to " if ( acpi_disabled ) ".

Doh, thank you for spotting the extra !.

My grub 2.04 configuration is as below:

xen_hypervisor /xen dom0_mem=4G acpi=force loglvl=all guest_loglvl=all
xen_module /Image rdinit=/init acpi=force noinitrd root=/dev/sdb1 rw

The log with the condition " if ( acpi_disabled ) " is as follows:

     (XEN) Adding cpu 126 to runqueue 0
     (XEN) Adding cpu 127 to runqueue 0
     (XEN) alternatives: Patching with alt table 00000000002d4f48 -> 00000000002d5764
     (XEN) *** LOADING DOMAIN 0 ***
     (XEN) Loading d0 kernel from boot module @ 0000000016257000
     (XEN) Allocating 1:1 mappings totalling 4096MB for dom0:
     (XEN) BANK[0] 0x00000008000000-0x00000010000000 (128MB)
     (XEN) BANK[1] 0x00000020000000-0x00000038000000 (384MB)
     (XEN) BANK[2] 0x00000050000000-0x00000080000000 (768MB)
     (XEN) BANK[3] 0x00202000000000-0x00202080000000 (2048MB)
     (XEN) BANK[4] 0x002020b0000000-0x002020c0000000 (256MB)
     (XEN) BANK[5] 0x00202600000000-0x00202620000000 (512MB)
     (XEN) Grant table range: 0x000000181c7000-0x00000018207000
     (XEN) Allocating PPI 16 for event channel interrupt
     (XEN) Loading zImage from 0000000016257000 to 0000000008080000-0000000009981200
     (XEN) Loading d0 DTB to 0x000000000fe00000-0x000000000fe0025b
     (XEN) Initial low memory virq threshold set at 0x4000 pages.
     (XEN) Scrubbing Free RAM in background
     (XEN) Std. Loglevel: All
     (XEN) Guest Loglevel: All
     (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
     (XEN) Data Abort Trap. Syndrome=0x6
     (XEN) Walking Hypervisor VA 0x10 on CPU0 via TTBR 0x00000000182ff000
     (XEN) 0TH[0x0] = 0x0000000018302f7f
     (XEN) 1ST[0x0] = 0x0000000018300f7f
     (XEN) 2ND[0x0] = 0x0000000000000000
     (XEN) CPU0: Unexpected Trap: Data Abort
     (XEN) ----[ Xen-4.13.0-rc  arm64  debug=y   Not tainted ]----
     (XEN) CPU:    0
     (XEN) PC:     00000000002b65c8 00000000002b65c8

Can you check with addr2line what this PC refers to?


Sorry for the late reply!
The PC refers to fdt_num_mem_rsv, called during init_done.
But device_tree_flattened is NULL at that point, which is why the data abort happened.
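
For context, this is roughly the shape of the top of dt_unreserved_regions()
in xen/arch/arm/setup.c as I read it (paraphrased and trimmed, not a verbatim
quote), which is why a NULL device_tree_flattened faults immediately:

    static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
                                             void (*cb)(paddr_t, paddr_t),
                                             int first)
    {
        /*
         * On an ACPI boot there is no flattened device tree, so
         * device_tree_flattened is NULL and this call faults (the
         * VA 0x10 in the walk above fits a small offset from NULL).
         */
        int nr = fdt_num_mem_rsv(device_tree_flattened);

        /* ... walk the nr FDT reserved-memory entries, then ... */

        cb(s, e);
    }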

So I added the change below and Xen dom0 can boot.

    diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
    index 1e83351..1ab80a1 100644
    --- a/xen/arch/arm/setup.c
    +++ b/xen/arch/arm/setup.c
    @@ -392,7 +392,8 @@ void __init discard_initial_modules(void)
                  !mfn_valid(maddr_to_mfn(e)) )
                 continue;

    -        dt_unreserved_regions(s, e, init_domheap_pages, 0);
    +        if ( acpi_disabled )
    +            dt_unreserved_regions(s, e, init_domheap_pages, 0);
         }
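
As an alternative it might be cleaner to put the check inside
dt_unreserved_regions() itself, so any other caller is also safe on an
ACPI boot. Only a sketch of the idea, not something I have tested:

    /*
     * At the top of dt_unreserved_regions(), before the FDT is touched
     * (the initialisation of nr would have to move below this check).
     */
    if ( !device_tree_flattened )
    {
        /* ACPI boot: no FDT reserved regions to skip, free the whole range. */
        cb(s, e);
        return;
    }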

Thank you so much!

Best Regards,
Wei

Cheers,





 

