Re: [PATCH v2] x86/PCI: Prefer MMIO over PIO on VMware hypervisor
On Tue, Sep 06, 2022 at 12:38:37PM +0530, Ajay Kaher wrote:
> During boot-time there are many PCI config reads; these can be performed
> either using port I/O instructions (PIO) or memory-mapped I/O (MMIO).
>
> PIO is less efficient than MMIO: it requires twice as many PCI accesses,
> and PIO instructions are serializing. As a result, MMIO should be
> preferred over PIO when possible.
>
> Virtual machine test result using the VMware hypervisor,
> one hundred thousand reads using raw_pci_read() took:
> PIO: 12.809 seconds
> MMIO: 8.517 seconds (~33.5% faster than PIO)
>
> Currently, when these reads are performed by a virtual machine, they all
> cause a VM-exit, and therefore each one of them induces considerable
> overhead.
>
> This overhead can be further reduced by mapping the MMIO region of the
> virtual machine to a memory area that holds the values the "emulated
> hardware" is supposed to return. The memory region is mapped as
> "read-only" in the NPT/EPT, so reads from these regions are treated as
> regular memory reads. Writes are still trapped and emulated by the
> hypervisor.
>
> Virtual machine test result with the above changes in the VMware
> hypervisor, one hundred thousand reads using raw_pci_read() took:
> PIO: 12.809 seconds
> MMIO: 0.010 seconds
>
> This helps reduce virtual machine PCI scan and initialization time by
> ~65%. In our case it was reduced to ~18 msec from ~55 msec.
>
> MMIO is also faster than PIO on bare-metal systems, but due to some bugs
> with legacy hardware and the smaller gains on bare metal, it seems
> prudent not to change bare-metal behaviour.
>
> Signed-off-by: Ajay Kaher <akaher@xxxxxxxxxx>

The subject line should be fixed -- you're changing the behaviour for all
hypervisors, not just VMware. I almost skipped this because of the subject
line.

Thanks,
Wei.
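
[Editor's note: for readers unfamiliar with why PIO config reads cost two
accesses while MMIO (ECAM) costs one, here is a minimal sketch of the two
mechanisms discussed in the quoted patch. It is not part of the patch; it
assumes Linux-kernel-style outl()/inl() helpers from <asm/io.h>, and the
function names are illustrative only.]

/*
 * Legacy PIO (0xCF8/0xCFC) config read: two port accesses per read,
 * and the IN/OUT instructions themselves are serializing.
 */
static u32 pio_config_read_sketch(u8 bus, u8 dev, u8 fn, u8 reg)
{
	u32 addr = (1u << 31) | (bus << 16) | (dev << 11) |
		   (fn << 8) | (reg & 0xFC);

	outl(addr, 0xCF8);	/* 1st access: select bus/dev/fn/register */
	return inl(0xCFC);	/* 2nd access: read the selected dword */
}

/*
 * ECAM/MMIO config read: a single memory read from the mapped config
 * space, which a hypervisor can back with a read-only NPT/EPT mapping
 * so no VM-exit is taken on the read path.
 */
static u32 mmio_config_read_sketch(void __iomem *ecam_base,
				   u8 bus, u8 dev, u8 fn, u16 reg)
{
	unsigned long off = ((unsigned long)bus << 20) |
			    ((unsigned long)dev << 15) |
			    ((unsigned long)fn << 12) | reg;

	return readl(ecam_base + off);	/* one access per read */
}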