
Re: [Xen-devel] HVMlite ABI specification DRAFT B + implementation outline



On 9/2/16 at 13:10, Jan Beulich wrote:
>>>> On 09.02.16 at 12:58, <roger.pau@xxxxxxxxxx> wrote:
>> On 9/2/16 at 11:56, Andrew Cooper wrote:
>>> On 08/02/16 19:03, Roger Pau Monné wrote:
>>>>  * PCI Express MMCFG: if Xen is able to identify any of these regions at
>>>>    boot time they will also be made available to the guest at the same
>>>>    position in its physical memory map. It is possible that Xen will trap
>>>>    accesses to those regions, but a guest should be able to use the native
>>>>    configuration mechanism in order to interact with this configuration
>>>>    space. If the hardware domain reports the presence of any of those
>>>>    regions using the PHYSDEVOP_pci_mmcfg_reserved hypercall Xen will also
>>>>    allow guest access to them.
>>>>
>>>>  * PCI BARs: it's not possible for Xen to know the position of the BARs of
>>>>    the PCI devices without hardware domain interaction.
>>>
>>> Xen requires no dom0 interaction to find all information like this for
>>> devices in segment 0 (i.e. all current hardware).  Segments other than 0
>>> may have their MMCONF regions expressed in AML only.
>>
>> Thanks for the comments, and please bear with me. I think we are mixing
>> two things here: one is the MMCFG areas, and the other is the BARs of
>> each PCI device.
>>
>> AFAIK MMCFG areas are described in the 'MCFG' ACPI table, which is
>> static and Xen should be able to parse on its own. Then I'm not sure
>> why PHYSDEVOP_pci_mmcfg_reserved is needed at all.
> 
> Because there are safety nets: Xen and Linux consult the memory
> map to determine whether actually using what the firmware says is
> an MMCFG area is safe. Linux in addition checks ACPI
> tables, and this is what Xen can't do (as it requires an AML
> interpreter), hence the hypercall to inform the hypervisor.

Hm, I guess I'm overlooking something, but I think Xen does check the
ACPI tables; see xen/arch/x86/x86_64/mmconfig-shared.c:400:

    if (pci_mmcfg_check_hostbridge()) {
        unsigned int i;

        pci_mmcfg_arch_init();
        for (i = 0; i < pci_mmcfg_config_num; ++i)
            if (pci_mmcfg_arch_enable(i))
                valid = 0;
    } else {
        acpi_table_parse(ACPI_SIG_MCFG, acpi_parse_mcfg);
        pci_mmcfg_arch_init();
        valid = pci_mmcfg_reject_broken();
    }

This AFAICT suggests that Xen is indeed able to parse the 'MCFG' table,
which contains the list of MMCFG regions on the system. Is there any
other ACPI table where this information is reported that I'm missing?
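For reference, each entry that acpi_parse_mcfg extracts from the MCFG
table describes one MMCFG window, roughly as sketched below (field names
are recalled from the ACPICA headers, so treat them as approximate),
together with the standard ECAM offset calculation:

    /* Sketch of one MCFG table entry (ACPICA's acpi_mcfg_allocation);
     * field names are from memory and may differ slightly. */
    #include <stdint.h>

    struct mcfg_entry {
        uint64_t address;           /* base of the MMCFG window */
        uint16_t pci_segment;       /* PCI segment group number */
        uint8_t  start_bus_number;  /* first bus decoded by this window */
        uint8_t  end_bus_number;    /* last bus decoded by this window */
        uint32_t reserved;
    };

    /* Offset of (bus, dev, fn, reg) inside the window: each function
     * gets a 4 KiB page of extended config space, laid out bus-major. */
    static inline uint64_t mcfg_offset(uint8_t bus, uint8_t dev,
                                       uint8_t fn, uint16_t reg)
    {
        return ((uint64_t)bus << 20) | ((uint64_t)dev << 15) |
               ((uint64_t)fn << 12) | reg;
    }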

This would then suggest that the hypercall is no longer needed.
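FWIW, the hypercall argument (struct physdev_pci_mmcfg_reserved in
xen/include/public/physdev.h) carries essentially the same information
as an MCFG entry plus a flags field; the sketch below is from memory, so
the exact names should be checked against the header:

    /* Sketch of the PHYSDEVOP_pci_mmcfg_reserved argument; check the
     * real layout in xen/include/public/physdev.h before relying on it. */
    struct physdev_pci_mmcfg_reserved {
        uint64_t address;    /* base of the MMCFG region */
        uint16_t segment;    /* PCI segment group */
        uint8_t  start_bus;  /* first decoded bus */
        uint8_t  end_bus;    /* last decoded bus */
        uint32_t flags;      /* e.g. whether the region is marked reserved
                              * in the host memory map */
    };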

>> Then for the BARs you need to know the specific PCI devices, which are
>> enumerated in the DSDT or similar ACPI tables; those tables are not
>> static, and thus cannot be parsed by Xen. We could do a brute force scan
>> of the whole PCI bus using the config registers, but that seems hacky.
>> And as Boris said we need to keep using PHYSDEVOP_pci_device_add in
>> order to notify Xen of the PXM information.
> 
> We can only be sure to have access to segment 0 at boot time.
> Other segments may be accessible via MMCFG only, and whether
> we can safely use the respective MMCFG won't be known (see
> above) early enough. That's also the reason why today we do a
> brute force scan only on segment 0.

Andrew suggested that all current hardware only uses segment 0, so it
seems like covering segment 0 is all that's needed for now. Are OSes
already capable of dealing with segments different from 0 that only
support MMCFG accesses?

And in that case, if a segment != 0 appears that only supports MMCFG
accesses, the MCFG table should contain an entry for that area, which
should allow us to properly scan it.
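For comparison, here is a minimal user-space sketch of the brute-force
segment 0 scan mentioned above, using the legacy 0xCF8/0xCFC mechanism
(this assumes Linux's <sys/io.h> port helpers and iopl() privileges; it
is only an illustration of the approach, not how Xen implements it, and
it cannot reach segments other than 0, which is exactly Jan's point):

    /* Illustrative brute-force scan of PCI segment 0 via the legacy
     * 0xCF8/0xCFC configuration mechanism. */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/io.h>   /* outl/inl/iopl; assumes Linux on x86 */

    static uint32_t pci_cf8_read32(uint8_t bus, uint8_t dev, uint8_t fn,
                                   uint8_t reg)
    {
        uint32_t addr = (1u << 31) | ((uint32_t)bus << 16) |
                        ((uint32_t)dev << 11) | ((uint32_t)fn << 8) |
                        (reg & 0xfc);

        outl(addr, 0xcf8);
        return inl(0xcfc);
    }

    int main(void)
    {
        if (iopl(3))      /* need I/O privilege; run as root */
            return 1;

        for (unsigned int bus = 0; bus < 256; bus++)
            for (unsigned int dev = 0; dev < 32; dev++)
                for (unsigned int fn = 0; fn < 8; fn++) {
                    uint32_t id = pci_cf8_read32(bus, dev, fn, 0x00);

                    if ((id & 0xffff) == 0xffff)   /* no device here */
                        continue;
                    printf("%02x:%02x.%u %04x:%04x\n", bus, dev, fn,
                           id & 0xffff, id >> 16);
                }
        return 0;
    }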

Roger.


