
Re: [Xen-devel] [ANNOUNCE] Third release candidates for 4.0.4 and 4.1.3



I did some testing of 4.1.3-rc3 with Linux 3.4.6 as dom0.
I hit the following 3 issues.
(1) Can't get deep C-state status via the "xenpm" command.
(2) Can't get P-state info via the "xenpm" command.
(3) HVM S3 resume error.
Issues 1 and 2 exist on 4.1.3-rc3 but not on the latest xen-unstable tree.
Issue 3 exists on both 4.1.3-rc3 and the latest xen-unstable tree.
CCed Konrad.
As for the PM features, could the reason be that Xen 4.1.3 doesn't work well 
with Linux 3.4.6 as Dom0?
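If this is a Dom0 side problem: as far as I know, on 3.x kernels it is the 
xen-acpi-processor driver that uploads the ACPI Px/Cx data to the hypervisor, 
so a quick sanity check on Dom0 could look like this (module/config names are 
my assumption for 3.4.6, not verified):
---- Dom0 PM sanity check (sketch) ----
# Is the upload driver built/loaded? It is what hands the ACPI Px/Cx tables to Xen.
lsmod | grep xen_acpi_processor
grep XEN_ACPI_PROCESSOR /boot/config-$(uname -r)
# Any complaints from it while uploading the data?
dmesg | grep -i "xen.*acpi"
----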

Details:
(1) C-state issue
Only C0 and C1 are reported via the "xenpm get-cpuidle-states" command; there 
is no info for C2 or C3. The xen-unstable tree (CS #25643) reports C0, C1, C2 
and C3.
----log here for 4.1.3------
[root@vt-nhm7 ~]# xenpm get-cpuidle-states 1
Max C-state: C7

cpu id               : 1
total C-states       : 2
idle time(ms)        : 477336
C0                   : transition [00000000000000000001]
                       residency  [00000000000000008629 ms]
C1                   : transition [00000000000000000001]
                       residency  [00000000000000477336 ms]
pc3                  : [00000000000000000000 ms]
pc6                  : [00000000000000000000 ms]
pc7                  : [00000000000000000000 ms]
cc3                  : [00000000000000000000 ms]
cc6                  : [00000000000000000000 ms]
------------
---log for 4.2-unstable tree ----
[root@vt-nhm7 ~]# xenpm get-cpuidle-states 1
Max possible C-state: C7

cpu id               : 1
total C-states       : 4
idle time(ms)        : 139164
C0                   : transition [                3189]
                       residency  [               67719 ms]
C1                   : transition [                 162]
                       residency  [                4739 ms]
C2                   : transition [                   9]
                       residency  [                  44 ms]
C3                   : transition [                3018]
                       residency  [               75951 ms]
pc2                  : [                   0 ms]
pc3                  : [                 758 ms]
pc6                  : [               28187 ms]
pc7                  : [                   0 ms]
cc3                  : [                1335 ms]
cc6                  : [               71613 ms]
cc7                  : [                   0 ms]
----
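A quick way to check whether any physical CPU reports more than the two 
C-states shown in the 4.1.3 log above (just a convenience loop around the 
same xenpm command; "xm info" reports nr_cpus the same way if xl is not used):
---- check all CPUs (sketch) ----
cpus=$(xl info | awk '/^nr_cpus/ {print $3}')
for c in $(seq 0 $((cpus - 1))); do
    echo -n "cpu $c: "
    xenpm get-cpuidle-states $c | grep "total C-states"
done
----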

(2) P-state issue
No P-state info is reported via the "xenpm get-cpufreq-states" command.
The xen-unstable tree (CS #25643) reports P-states fine.
--- log here for 4.1.3--
[root@vt-nhm7 ~]# xenpm get-cpufreq-states 1
[root@vt-nhm7 ~]#
--------------

--- log for 4.2-unstable -----
[root@vt-nhm7 ~]# xenpm get-cpufreq-states 1
cpu id               : 1
total P-states       : 10
usable P-states      : 10
current frequency    : 1596 MHz
P0         [2794 MHz]: transition [                   1]
                       residency  [                   0 ms]
P1         [2793 MHz]: transition [                   0]
                       residency  [                   0 ms]
P2         [2660 MHz]: transition [                   0]
                       residency  [                   0 ms]
P3         [2527 MHz]: transition [                   0]
                       residency  [                   0 ms]
P4         [2394 MHz]: transition [                   0]
                       residency  [                   0 ms]
P5         [2261 MHz]: transition [                   0]
                       residency  [                   0 ms]
P6         [2128 MHz]: transition [                   0]
                       residency  [                   0 ms]
P7         [1995 MHz]: transition [                   0]
                       residency  [                   0 ms]
P8         [1862 MHz]: transition [                   0]
                       residency  [                   0 ms]
*P9        [1596 MHz]: transition [                   1]
                       residency  [                   7 ms]
-----------
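On 4.1.3 the cpufreq side looks completely empty, so it may also be worth 
dumping the cpufreq parameters to see whether the hypervisor ever received a 
P-state table at all (same xenpm binary, assuming these sub-commands behave 
the same on 4.1.3):
---- check cpufreq parameters (sketch) ----
# governor / scaling driver details for one CPU
xenpm get-cpufreq-para 1
# sample C/P state statistics over a 10 second window
xenpm start 10
----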

(3) HVM S3 resume issue
An error occurs when the HVM guest resumes from S3.
Both 4.1.3 and the xen-unstable tree have this issue.
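For reference, a typical way to drive the HVM S3 cycle looks like this (not 
necessarily the exact steps used here, and assuming the s3resume trigger 
behaves the same on 4.1.3):
---- typical HVM S3 cycle (sketch) ----
# inside the HVM guest: enter S3
echo mem > /sys/power/state
# from Dom0: wake the guest back up
xm trigger <domain> s3resume    # (or "xl trigger" on newer toolstacks)
----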
-----console log for S3 resume ------------
Red Hat Enterprise Linux Server release 6.2 (Santiago)
Kernel 2.6.32-220.el6.x86_64 on an x86_64

rhel6u2-GA.guest login: eth0: no IPv6 routers present
PM: Syncing filesystems ... done.
Freezing user space processes ... (elapsed 0.00 seconds) done.
Freezing remaining freezable tasks ... (elapsed 0.00 seconds) done.
Suspending console(s) (use no_console_suspend to debug)
ACPI: Preparing to enter system sleep state S3
Disabling non-boot CPUs ...
Back to C!
ACPI: Waking up from system sleep state S3
ata_piix 0000:00:01.1: restoring config space at offset 0x1 (was 0x2800000, 
writing 0x2800005)
xen-platform-pci 0000:00:03.0: PCI INT A -> GSI 28 (level, low) -> IRQ 28
WARNING: g.e. still in use!
WARNING: leaking g.e. and page still in use!
WARNING: g.e. still in use!
WARNING: leaking g.e. and page still in use!
WARNING: g.e. still in use!
WARNING: leaking g.e. and page still in use!
pm_op(): platform_pm_resume+0x0/0x50 returns -19
PM: Device i8042 failed to resume: error -19
Restarting tasks ... done.
---------If I type some keys and press "Ctrl+C" on the keyboard after resume, 
the following call trace shows up-----------
INFO: task flush-202:0:342 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
flush-202:0   D 0000000000000000     0   342      2 0x00000000
 ffff88003d83b5b0 0000000000000046 ffff88003d83b620 ffffffff811238a1
 ffff8800000106c0 ffff880000000001 ffff88003adf8100 ffff88003d83b748
 ffff880037c45b38 ffff88003d83bfd8 000000000000f4e8 ffff880037c45b38
Call Trace:
 [<ffffffff811238a1>] ? __alloc_pages_nodemask+0x111/0x940
 [<ffffffff81090ede>] ? prepare_to_wait+0x4e/0x80
 [<ffffffffa006009d>] do_get_write_access+0x29d/0x520 [jbd2]
 [<ffffffff81090c30>] ? wake_bit_function+0x0/0x50
 [<ffffffffa0060471>] jbd2_journal_get_write_access+0x31/0x50 [jbd2]
 [<ffffffffa00b4b88>] __ext4_journal_get_write_access+0x38/0x80 [ext4]
 [<ffffffffa00b6a42>] ext4_mb_mark_diskspace_used+0xf2/0x300 [ext4]
 [<ffffffffa00bdc19>] ext4_mb_new_blocks+0x2a9/0x560 [ext4]
 [<ffffffffa00b0f1e>] ? ext4_ext_find_extent+0x2be/0x320 [ext4]
 [<ffffffffa00b3fd3>] ext4_ext_get_blocks+0x1113/0x1a10 [ext4]
 [<ffffffff81250132>] ? generic_make_request+0x2b2/0x5c0
 [<ffffffffa0091335>] ext4_get_blocks+0xf5/0x2a0 [ext4]
 [<ffffffff811271a5>] ? pagevec_lookup_tag+0x25/0x40
 [<ffffffffa00922fc>] mpage_da_map_blocks+0xac/0x450 [ext4]
 [<ffffffffa005f3c5>] ? jbd2_journal_start+0xb5/0x100 [jbd2]
 [<ffffffffa0092f37>] ext4_da_writepages+0x2f7/0x660 [ext4]
 [<ffffffff81126351>] do_writepages+0x21/0x40
 [<ffffffff811a046d>] writeback_single_inode+0xdd/0x2c0
 [<ffffffff811a08ae>] writeback_sb_inodes+0xce/0x180
 [<ffffffff811a0a0b>] writeback_inodes_wb+0xab/0x1b0
 [<ffffffff811a0dab>] wb_writeback+0x29b/0x3f0
 [<ffffffff814eca40>] ? thread_return+0x4e/0x77e
 [<ffffffff8107cc02>] ? del_timer_sync+0x22/0x30
 [<ffffffff811a1099>] wb_do_writeback+0x199/0x240
 [<ffffffff811a11a3>] bdi_writeback_task+0x63/0x1b0
 [<ffffffff81090ab7>] ? bit_waitqueue+0x17/0xd0
 [<ffffffff81134d40>] ? bdi_start_fn+0x0/0x100
 [<ffffffff81134dc6>] bdi_start_fn+0x86/0x100
 [<ffffffff81134d40>] ? bdi_start_fn+0x0/0x100
 [<ffffffff81090886>] kthread+0x96/0xa0
 [<ffffffff8100c14a>] child_rip+0xa/0x20
 [<ffffffff810907f0>] ? kthread+0x0/0xa0
 [<ffffffff8100c140>] ? child_rip+0x0/0x20
-------------------------
(END of this email.)


Best Regards,
     Yongjie (Jay)

> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxx
> [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of Keir Fraser
> Sent: Sunday, July 22, 2012 11:42 PM
> To: xen-devel@xxxxxxxxxxxxx
> Subject: [Xen-devel] [ANNOUNCE] Third release candidates for 4.0.4 and
> 4.1.3
> 
> Folks,
> 
> I have just tagged 3rd release candidates for 4.0.4 and 4.1.3:
> 
> http://xenbits.xen.org/staging/xen-4.0-testing.hg (tag 4.0.4-rc3)
> http://xenbits.xen.org/staging/xen-4.1-testing.hg (tag 4.1.3-rc3)
> 
> Please test! Assuming nothing crops up, we will make these the official
> point releases at the end of this month.
> 
>  -- Keir
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

