Re: [Xen-devel] [PATCH] xen/pci: try to reserve MCFG areas earlier
On 9/10/19 5:46 AM, Igor Druzhinin wrote:
> On 10/09/2019 02:47, Boris Ostrovsky wrote:
>> On 9/9/19 5:48 PM, Igor Druzhinin wrote:
>>> On 09/09/2019 20:19, Boris Ostrovsky wrote:
>>>> On 9/8/19 7:37 PM, Igor Druzhinin wrote:
>>>>> On 09/09/2019 00:30, Boris Ostrovsky wrote:
>>>>>> On 9/8/19 5:11 PM, Igor Druzhinin wrote:
>>>>>>> On 08/09/2019 19:28, Boris Ostrovsky wrote:
>>>>>>>> Would it be possible for us to parse MCFG ourselves in pci_xen_init()?
>>>>>>>> I realize that we'd be doing this twice (or maybe even three times,
>>>>>>>> since apparently both pci_arch_init() and acpi_init() do it).
>>>>>>>>
>>>>>>> I don't think it makes sense:
>>>>>>> a) it needs to be done after ACPI is initialized, since we need to parse
>>>>>>> the DSDT to figure out the exact reserved region - that's why it's
>>>>>>> currently done in acpi_init() (see the commit message for the reasons why)
>>>>>> Hmm... We should be able to parse ACPI tables by the time
>>>>>> pci_arch_init() is called. In fact, if you look at
>>>>>> pci_mmcfg_early_init() you will see that it does just that.
>>>>>>
>>>>> The point is not to parse MCFG after acpi_init() but to parse the DSDT for
>>>>> reserved resources, which can be done only after ACPI initialization.
>>>> OK, I think I understand now what you are trying to do --- you are
>>>> essentially trying to account for the range inserted by
>>>> setup_mcfg_map(), right?
>>>>
>>> Actually, it's pci_mmcfg_late_init(), called out of acpi_init() -
>>> that's where MCFG areas are properly sized.
>> pci_mmcfg_late_init() reads the (static) MCFG, which doesn't need DSDT
>> parsing, does it? setup_mcfg_map() OTOH does need it, as it uses data from
>> _CBA (or is it _CRS?), and I think that's why we can't parse MCFG prior to
>> acpi_init(). So what I said above indeed won't work.
>>
> No, it uses is_acpi_reserved() (it's called indirectly, so it might be well
> hidden) to parse the DSDT, find a reserved resource in it, and size the MCFG
> area accordingly.

Right, I see it. Thanks for the explanation.

> setup_mcfg_map() is called for every root bus discovered and indeed tries
> to evaluate _CBA, but at that point pci_mmcfg_late_init() has already
> finished MCFG registration for every cold-plugged bus (whose information is
> described in the MCFG table), so those calls are effectively no-ops.
>
>>> setup_mcfg_map() is mostly for bus hotplug, where the MCFG area is
>>> discovered by evaluating the _CBA method; for cold-plugged buses it just
>>> confirms that the MCFG area is already registered, because they are
>>> mandated to be in the MCFG table at boot time.
>>>
>>>> The other question I have is why you think it's worth keeping
>>>> xen_mcfg_late() as a late initcall. How could MCFG info be updated
>>>> between acpi_init() and the late_initcalls being run? I'd think it can
>>>> only happen when a new device is hotplugged.
>>>>
>>> It was a precaution against setup_mcfg_map() calls that might add new
>>> areas that are not in the MCFG table but for some reason have a _CBA
>>> method. It's obviously a "firmware is broken" scenario, so I don't have
>>> strong feelings about keeping it here. I'd prefer to remove it in v2 if
>>> you want.
>> Isn't setup_mcfg_map() called before the first xen_add_device(), which is
>> where you are calling xen_mcfg_late()?
>>
> setup_mcfg_map() calls are done in order of root bus discovery, which
> happens *after* the previous root bus has been enumerated.
> So the order is: call setup_mcfg_map() for root bus 0, find that
> pci_mmcfg_late_init() has finished MCFG area registration, perform PCI
> enumeration of bus 0, call xen_add_device() for every device there, call
> setup_mcfg_map() for root bus X, and so on.

Ah, yes. Multiple buses. If that's the case, then why don't we need to call
xen_mcfg_late() for the first device on each bus?

-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
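For readers following the thread, below is a rough sketch of the operation
being discussed: walking the MCFG areas that pci_mmcfg_late_init() has sized
and reporting them to Xen via PHYSDEVOP_pci_mmcfg_reserved, triggered once
from the device-add path rather than from a late initcall. This is not the
patch itself; the guard and function names below are hypothetical, and the
structure and field names are recalled from the Linux and Xen headers, so
they should be checked against the actual code.

/*
 * Illustrative sketch only -- not the patch under review.  It shows the
 * idea discussed above: by the time the first PCI device is added,
 * acpi_init() (and hence pci_mmcfg_late_init()) has already populated
 * pci_mmcfg_list for all cold-plugged buses, so the areas can be handed
 * to Xen once from the device-add path.
 */
#include <linux/list.h>
#include <linux/printk.h>
#include <asm/pci_x86.h>            /* pci_mmcfg_list, struct pci_mmcfg_region */
#include <asm/xen/hypercall.h>      /* HYPERVISOR_physdev_op() */
#include <xen/interface/physdev.h>  /* physdev_pci_mmcfg_reserved (recalled) */
#include <xen/xen.h>                /* xen_initial_domain() */

static bool mcfg_reported;          /* hypothetical once-only guard */

/* Report every MCFG area currently on pci_mmcfg_list to Xen. */
static void xen_report_mcfg_areas(void)
{
	struct pci_mmcfg_region *cfg;

	if (!xen_initial_domain() || list_empty(&pci_mmcfg_list))
		return;

	list_for_each_entry(cfg, &pci_mmcfg_list, list) {
		struct physdev_pci_mmcfg_reserved r = {
			.address   = cfg->address,
			.segment   = cfg->segment,
			.start_bus = cfg->start_bus,
			.end_bus   = cfg->end_bus,
			.flags     = XEN_PCI_MMCFG_RESERVED, /* flag name as recalled */
		};

		if (HYPERVISOR_physdev_op(PHYSDEVOP_pci_mmcfg_reserved, &r))
			pr_warn("Failed to reserve MCFG for segment %u\n",
				cfg->segment);
	}
}

/* Hypothetical hook on the device-add path (e.g. from xen_add_device()):
 * do the reporting exactly once, on the very first device seen. */
static void xen_report_mcfg_once(void)
{
	if (!mcfg_reported) {
		mcfg_reported = true;
		xen_report_mcfg_areas();
	}
}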