RE: [Xen-devel] bogus HPET initialization order on x86
>>> "Wei, Gang" 03/10/11 4:36 AM >>> >Jan Beulich wrote on 2011-03-09: >>> Further, with the channel getting enabled (down the >>> hpet_fsb_cap_lookup() call tree) before hpet_events[] gets fully >>> initialized, I'd also think it should be possible to hit the >>> spurious warning in hpet_interrupt_handler() just because of >>> improper initialization order. >>> >>> If that's all impossible in practice, adding some meaningful >>> comments to the code to describe why this is so would be much appreciated. > >For normal booting case, hpet interrupts should not come before dom0 start >booting and pass ACPI tables to hypervisor, so >that's impossible in practice >in this case. The legacy code path gets called from the timer interrupt, so what would prevent this path being taken before Dom0 boots? The non-legacy interrupts also get fully set up before Dom0 boots - I don't think I saw anything that specifically enables these interrupts upon arrival of some ACPI data from Dom0. >For S3 resume case, the hpet_broadcast_init() is called in >device_power_up()->time_resume()->disable_pit_irq(), when >the irq was >disabled and all non-boot cpus not enabled. So that's also impossible. No, the resume path is not that interesting, albeit it also looks bogus to initialize spinlock and rwlock again in that path, but that'll be taken care of once init and resume paths get split. >Do I miss any other cases? If not, I will cook a patch to add the required >comments along with pulling spinlock/rwlock >initialization before >.event_handler settings. Yes, please, though I could also fold this in my bigger cleanup patch (properly separating init a resume paths) if I still didn't understand some aspect of it and this turns out a mere cleanup. Jan _______________________________________________ Xen-devel mailing list Xen-devel@xxxxxxxxxxxxxxxxxxx http://lists.xensource.com/xen-devel