[Xen-devel] [v9][PATCH 06/16] hvmloader/pci: disable all pci devices conflicting with rdm
When allocating mmio addresses for PCI bars, mmio may overlap with
reserved regions. For now we just disable the associated devices to
avoid conflicts; later we will reshape the current mmio allocation
mechanism to fix this completely.

CC: Keir Fraser <keir@xxxxxxx>
CC: Jan Beulich <jbeulich@xxxxxxxx>
CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
CC: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
CC: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
CC: Ian Campbell <ian.campbell@xxxxxxxxxx>
CC: Wei Liu <wei.liu2@xxxxxxxxxx>
Signed-off-by: Tiejun Chen <tiejun.chen@xxxxxxxxx>
---
v9:

* A small improvement to the code implementation, but this solution is
  still under discussion.

v8:

* Based on the current discussion it is hard to reshape the original
  mmio allocation mechanism, and we don't have a good and simple way to
  do so in the short term. So instead of complicating that process
  further, we just check for any conflicts and disable all associated
  devices.

v6 ~ v7:

* Nothing is changed.

v5:

* Rename the is_64bar field inside struct bars to flag, and extend it
  to also indicate whether this bar is already allocated.

v4:

* We have to re-design this as follows:

  #1. Goal

  The MMIO region should exclude all reserved device memory.

  #2. Requirements

  #2.1 We still need to make sure the MMIO region fits all pci devices
       as before.

  #2.2 Accommodate reserved memory regions that are not aligned.

  If I'm missing something let me know.

  #3. How to

  #3.1 Address #2.1

  We need to either populate more RAM or expand more highmem. But note
  that only 64bit-bars can work with highmem, and as you mentioned we
  should also avoid expanding highmem as far as possible. So my
  implementation allocates 32bit-bars and 64bit-bars in order:

  1>. The first allocation round handles just the 32bit-bars.
      If we can finish allocating all 32bit-bars, we go straight to
      allocating the 64bit-bars with all remaining resources, including
      low pci memory.
      If not, we calculate how much RAM should be populated to allocate
      the remaining 32bit-bars, then populate sufficient RAM as
      exp_mem_resource and go to the second allocation round 2>.

  2>. The second allocation round handles the remaining 32bit-bars.
      In theory we should be able to finish allocating all 32bit-bars
      here, then go to the third allocation round 3>.

  3>. The third allocation round handles the 64bit-bars.
      We first try to allocate from the remaining low memory resource.
      If that isn't enough, we expand highmem to allocate the
      64bit-bars. This process is the same as the original.

  #3.2 Address #2.2

  I'm trying to accommodate the reserved memory regions that are not
  aligned: we should skip all reserved device memory, but we also need
  to check whether other, smaller bars can be allocated when a mmio
  hole exists between resource->base and the reserved device memory.
  If a hole exists between base and the reserved device memory, we
  simply move on and try to allocate the next bar, since all bars are
  in descending order of size. If not, we move resource->base to
  reserved_end and reallocate this bar.
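To make that last decision (#3.2) a bit more concrete, here is a
minimal, self-contained sketch of the skip-or-step-over logic. The
struct resource / struct reserved types, names and sizes below are
made up for illustration only; this is not the hvmloader allocator,
and exhaustion of the resource window is not handled:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-ins, not hvmloader's real structures. */
    struct resource { uint64_t base, max; };
    struct reserved { uint64_t start, end; };  /* [start, end) */

    /*
     * Try to place one bar of size/alignment bar_sz at res->base.
     * Returns 1 and stores the address in *addr on success, or 0 when
     * a hole exists below the reserved region but is too small for
     * this bar, in which case the caller moves on to the next
     * (smaller) bar and leaves the hole for it.
     */
    static int place_bar(struct resource *res, const struct reserved *rsv,
                         uint64_t bar_sz, uint64_t *addr)
    {
        for ( ;; )
        {
            uint64_t base = (res->base + bar_sz - 1) & ~(bar_sz - 1);

            /* No overlap with the reserved region: the bar fits here. */
            if ( base + bar_sz <= rsv->start || base >= rsv->end )
            {
                res->base = base + bar_sz;
                *addr = base;
                return 1;
            }

            /*
             * A hole exists between res->base and the reserved region
             * but this bar is too big for it: leave the hole for a
             * smaller bar (bars are sorted in descending size order).
             */
            if ( base < rsv->start )
                return 0;

            /* No usable hole: step over the reserved region and retry. */
            res->base = rsv->end;
        }
    }

    int main(void)
    {
        struct resource mem = { 0xf0000000, 0xfc000000 };
        struct reserved rmrr = { 0xf0100000, 0xf0200000 };
        uint64_t addr;

        /* A 16MB bar cannot use the 1MB hole below the reserved range ... */
        if ( !place_bar(&mem, &rmrr, 0x1000000, &addr) )
            printf("16MB bar deferred, base stays at %#llx\n",
                   (unsigned long long)mem.base);

        /* ... but a 1MB bar can. */
        if ( place_bar(&mem, &rmrr, 0x100000, &addr) )
            printf("1MB bar placed at %#llx\n", (unsigned long long)addr);

        return 0;
    }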
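As for the conflict check done by this patch itself: check_overlap()
is not defined here. Conceptually it is just a half-open interval
intersection test, roughly as in the sketch below (the name
bar_overlaps_rdm is made up for illustration):

    #include <stdint.h>

    /*
     * Rough illustration of the semantics expected from
     * check_overlap(): does [start, start + size) intersect
     * [reserved_start, reserved_start + reserved_size)?
     * The real helper is provided elsewhere in hvmloader, not by this
     * patch.
     */
    static int bar_overlaps_rdm(uint64_t start, uint64_t size,
                                uint64_t reserved_start,
                                uint64_t reserved_size)
    {
        return (start + size > reserved_start) &&
               (start < reserved_start + reserved_size);
    }

With that test, the new disable_conflicting_devices() below walks every
devfn, compares each memory bar against every E820_RESERVED entry in
the guest memory map, and disables the device via its PCI command
register when an intersection is found, so the conflicting device no
longer decodes the overlapping range.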
 tools/firmware/hvmloader/pci.c | 81 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 81 insertions(+)

diff --git a/tools/firmware/hvmloader/pci.c b/tools/firmware/hvmloader/pci.c
index 5ff87a7..15ed9b2 100644
--- a/tools/firmware/hvmloader/pci.c
+++ b/tools/firmware/hvmloader/pci.c
@@ -38,6 +38,84 @@ uint64_t pci_hi_mem_start = 0, pci_hi_mem_end = 0;
 enum virtual_vga virtual_vga = VGA_none;
 unsigned long igd_opregion_pgbase = 0;
 
+/*
+ * We should check if all valid bars conflict with RDM.
+ *
+ * Here we just need to check mmio bars in the case of non-highmem
+ * since the hypervisor can make sure RDM doesn't involve highmem.
+ */
+static void disable_conflicting_devices(void)
+{
+    uint8_t is_64bar;
+    uint32_t devfn, bar_reg, cmd, bar_data;
+    uint16_t vendor_id, device_id;
+    unsigned int bar, i;
+    uint64_t bar_sz;
+    bool is_conflict = false;
+
+    for ( devfn = 0; devfn < 256; devfn++ )
+    {
+        vendor_id = pci_readw(devfn, PCI_VENDOR_ID);
+        device_id = pci_readw(devfn, PCI_DEVICE_ID);
+        if ( (vendor_id == 0xffff) && (device_id == 0xffff) )
+            continue;
+
+        /* Check all bars */
+        for ( bar = 0; bar < 7 && !is_conflict; bar++ )
+        {
+            bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
+            if ( bar == 6 )
+                bar_reg = PCI_ROM_ADDRESS;
+
+            bar_data = pci_readl(devfn, bar_reg);
+            bar_data &= PCI_BASE_ADDRESS_MEM_MASK;
+            if ( !bar_data )
+                continue;
+
+            if ( bar_reg != PCI_ROM_ADDRESS )
+                is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
+                             PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
+                             (PCI_BASE_ADDRESS_SPACE_MEMORY |
+                             PCI_BASE_ADDRESS_MEM_TYPE_64));
+
+            /* Until here we never conflict high memory. */
+            if ( is_64bar && pci_readl(devfn, bar_reg + 4) )
+                continue;
+
+            /* Just check mmio bars. */
+            if ( ((bar_data & PCI_BASE_ADDRESS_SPACE) ==
+                  PCI_BASE_ADDRESS_SPACE_IO) )
+                continue;
+
+            bar_sz = pci_readl(devfn, bar_reg);
+            bar_sz &= PCI_BASE_ADDRESS_MEM_MASK;
+
+            for ( i = 0; i < memory_map.nr_map && !is_conflict; i++ )
+            {
+                if ( memory_map.map[i].type == E820_RESERVED )
+                {
+                    uint64_t reserved_start, reserved_size;
+                    reserved_start = memory_map.map[i].addr;
+                    reserved_size = memory_map.map[i].size;
+                    if ( check_overlap(bar_data , bar_sz,
+                                       reserved_start, reserved_size) )
+                    {
+                        /* Now disable the memory or I/O mapping. */
+                        printf("pci dev %02x:%x bar %02x : 0x%08x : conflicts "
+                               "reserved resource so disable this device.!\n",
+                               devfn>>3, devfn&7, bar_reg, bar_data);
+                        cmd = pci_readw(devfn, PCI_COMMAND);
+                        pci_writew(devfn, PCI_COMMAND, ~cmd);
+                        /* So need to jump next device. */
+                        is_conflict = true;
+                    }
+                }
+            }
+        }
+        is_conflict = false;
+    }
+}
+
 void pci_setup(void)
 {
     uint8_t is_64bar, using_64bar, bar64_relocate = 0;
@@ -462,6 +540,9 @@ void pci_setup(void)
         cmd |= PCI_COMMAND_IO;
         pci_writew(vga_devfn, PCI_COMMAND, cmd);
     }
+
+    /* If pci bars conflict with RDM we need to disable this pci device. */
+    disable_conflicting_devices();
 }
 
 /*
-- 
1.9.1

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel