[Xen-devel] [PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with PV support
v4->v5:
- Move the optimized 2-task contending code to the generic file to
enable more architectures to use it without code duplication.
- Address some of the style-related comments by PeterZ.
- Allow the use of an unfair queue spinlock in a real para-virtualized
execution environment.
- Add para-virtualization support to the qspinlock code by ensuring
that the lock holder and queue head stay alive as much as possible.
v3->v4:
- Remove debugging code and fix a configuration error
- Simplify the qspinlock structure and streamline the code to make it
perform a bit better
- Add an x86 version of asm/qspinlock.h to hold x86-specific
optimizations.
- Add an optimized x86 code path for 2 contending tasks to improve
low contention performance.
v2->v3:
- Simplify the code by using the numerous CPU mode only, without an unfair option.
- Use the latest smp_load_acquire()/smp_store_release() barriers.
- Move the queue spinlock code to kernel/locking.
- Make the use of queue spinlock the default for x86-64 without user
configuration.
- Additional performance tuning.
v1->v2:
- Add some more comments to document what the code does.
- Add a numerous CPU mode to support >= 16K CPUs
- Add a configuration option to allow lock stealing which can further
improve performance in many cases.
- Enable wakeup of the queue head CPU at unlock time in the non-numerous
CPU mode.
This patch set has 3 different sections:
1) Patches 1-3: Introduce a queue-based spinlock implementation that
can replace the default ticket spinlock without increasing the
size of the spinlock data structure. As a result, critical kernel
data structures that embed a spinlock won't grow in size or have
their data alignment broken. (A simplified sketch of the idea
follows this list.)
2) Patches 4 and 5: Enable the use of an unfair queue spinlock in a
real para-virtualized execution environment. This can resolve
some locking-related performance issues caused by the fact that
the next CPU in line for the lock may have been scheduled out
for a period of time. (See the unfair-lock sketch after the
testing notes below.)
3) Patches 6-8: Enable qspinlock para-virtualization support by making
sure that the lock holder and the queue head stay alive as long as
possible. (See the halt/kick sketch after the performance discussion
below.)
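For readers unfamiliar with the technique in patches 1-3, the sketch
below illustrates, with plain C11 atomics, how an MCS-style waiter
queue can be packed into a single 4-byte lock word: bit 0 is the lock
bit, and the remaining bits encode the CPU id of the queue tail, which
indexes a per-CPU node used only while waiting. All names
(sketch_lock() and friends) are invented for this illustration, and
the code is deliberately simplified from the actual patches -- no
2-task fast path, no PV hooks, no nesting of per-CPU nodes, and
sequentially consistent atomics instead of tuned barriers:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define LOCKED     1u   /* bit 0: the lock is held              */
#define TAIL_SHIFT 1    /* bits 1..31: (queue-tail CPU id) + 1  */
#define MAX_CPUS   64

struct mcs_node {
    _Atomic(struct mcs_node *) next;
    atomic_bool                wait;    /* spin while true */
};

static struct mcs_node mcs_nodes[MAX_CPUS]; /* per-CPU in the kernel */

typedef struct { _Atomic uint32_t val; } sketch_spinlock_t;

static void sketch_lock(sketch_spinlock_t *l, unsigned int cpu)
{
    struct mcs_node *node = &mcs_nodes[cpu], *next;
    uint32_t tail = (cpu + 1) << TAIL_SHIFT;
    uint32_t old = 0;

    /* Fast path: the word is completely free -- one cmpxchg takes it. */
    if (atomic_compare_exchange_strong(&l->val, &old, LOCKED))
        return;

    /* Slow path: queue up on our per-CPU node. */
    atomic_store(&node->next, (struct mcs_node *)NULL);
    atomic_store(&node->wait, true);

    /* Make ourselves the new tail, preserving the lock bit. */
    do {
        old = atomic_load(&l->val);
    } while (!atomic_compare_exchange_weak(&l->val, &old,
                                           (old & LOCKED) | tail));

    if (old >> TAIL_SHIFT) {
        /* A predecessor exists: link in and spin on our own node. */
        struct mcs_node *prev = &mcs_nodes[(old >> TAIL_SHIFT) - 1];

        atomic_store(&prev->next, node);
        while (atomic_load(&node->wait))
            ;       /* each waiter spins on its own cacheline */
    }

    /* We are the queue head: wait for the holder to release. */
    while (atomic_load(&l->val) & LOCKED)
        ;

    /* Take the lock; clear the tail too if nobody queued behind us. */
    old = tail;
    if (atomic_compare_exchange_strong(&l->val, &old, LOCKED))
        return;     /* we were the last waiter */

    atomic_fetch_or(&l->val, LOCKED);
    while (!(next = atomic_load(&node->next)))
        ;           /* successor may still be linking in */
    atomic_store(&next->wait, false);   /* hand queue headship over */
}

static void sketch_unlock(sketch_spinlock_t *l)
{
    atomic_fetch_and(&l->val, ~LOCKED);
}

Because each waiter spins on its own per-CPU node rather than on the
lock word itself, the cacheline bouncing that a ticket lock suffers
under contention largely disappears, while the lock itself stays 4
bytes.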
Patches 1-3 are fully tested and ready for production. Patches
4-8, on the other hand, are not fully tested. They have undergone
compilation tests with various combinations of kernel config settings,
boot-up tests on bare metal, and a simple performance test in a
KVM guest. Further testing and performance characterization still
need to be done, so comments on them are welcome. Suggestions or
recommendations on how to add PV support in the Xen environment are
also needed.
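To make concrete what "unfair" means in patches 4 and 5: when vCPUs
can be preempted, strict FIFO queueing lets one scheduled-out vCPU
stall every waiter behind it, so the lock must allow some form of
stealing. The fragment below shows one simple form of unfairness -- a
test-and-set fallback on the lock bit of the same 4-byte word -- as an
illustration only; it is not necessarily the stealing policy the
series actually implements:

#include <stdatomic.h>
#include <stdint.h>

#define LOCKED 1u   /* bit 0 of the 4-byte lock word */

/*
 * Unfair acquire: any running vCPU may grab the lock bit the moment
 * it sees the lock free, regardless of how long others have waited,
 * so a preempted waiter can no longer block everyone else.
 */
static void unfair_lock(_Atomic uint32_t *lock)
{
    for (;;) {
        if (!(atomic_fetch_or(lock, LOCKED) & LOCKED))
            return;                 /* got it, possibly out of turn */
        while (atomic_load(lock) & LOCKED)
            ;                       /* read-only spin until it looks free */
    }
}

The price is the usual test-and-set downside of potential starvation
and extra cacheline traffic, which is why the unfair mode is intended
only for para-virtualized guests, where fairness to a preempted vCPU
matters less than keeping running vCPUs making progress.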
The queue spinlock has slightly better performance than the ticket
spinlock in the uncontended case. Its performance can be much better
under moderate to heavy contention. This patch set therefore has the
potential to improve the performance of any workload with moderate to
heavy spinlock contention.
The queue spinlock is especially suitable for NUMA machines with at
least 2 sockets, though a noticeable performance benefit probably won't
show up on machines with fewer than 4 sockets.
The purpose of this patch set is not to solve any particular spinlock
contention problem. Those need to be solved by refactoring the code
to make more efficient use of the lock or to use finer-grained locks.
The main purpose is to make lock contention problems more tolerable
until someone can spend the time and effort to fix them.
The bare-metal performance data are discussed in the individual patch
descriptions. For PV support, a simple performance test was performed
on a 2-node, 20-CPU KVM guest running a 3.14-rc4 kernel on a larger
8-node machine. The disk workload of the AIM7 benchmark was run on
both ext4 and xfs RAM disks at 2000 users. The JPM (jobs/minute)
figures for the test runs were:
kernel                     XFS FS    %change   ext4 FS   %change
------                     ------    -------   -------   -------
PV ticketlock (baseline)   2390438      -      1366743      -
qspinlock                  1775148    -26%     1336303    -2.2%
PV qspinlock               2264151    -5.3%    1351351    -1.1%
unfair qspinlock           2404810    +0.6%    1612903    +18%
unfair + PV qspinlock      2419355    +1.2%    1612903    +18%
The XFS test had moderate spinlock contention of 1.6%, whereas the ext4
test had heavy spinlock contention of 15.4%, as reported by perf. It
appears that the PV qspinlock support still has room for improvement
compared with the current PV ticketlock implementation.
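For context on what patches 6-8 aim at: the PV support follows the
same general halt/kick pattern as the existing PV ticketlocks. A vCPU
that has spun for too long asks the hypervisor to halt it instead of
burning cycles, and the CPU that releases the lock (or hands over
queue headship) kicks the sleeper awake. The sketch below captures
only the shape of that idea; the hook names, the threshold, and the
stub bodies are assumptions made for illustration, not this series'
actual interface:

#include <stdatomic.h>
#include <stdbool.h>

#define SPIN_THRESHOLD (1 << 15)   /* spins before yielding the vCPU */

struct pv_node {
    atomic_bool wait;     /* cleared when we become queue head    */
    atomic_bool halted;   /* set while sleeping in the hypervisor */
};

/* Stub: a real guest would halt the vCPU here via a hypercall. */
static void pv_wait(struct pv_node *node)
{
    while (atomic_load(&node->halted) && atomic_load(&node->wait))
        ;
}

/* Stub: a real guest would kick (wake) the halted vCPU here. */
static void pv_kick(struct pv_node *node)
{
    atomic_store(&node->halted, false);
}

/* Waiter side: spin for a bounded time, then sleep until kicked. */
static void pv_queue_wait(struct pv_node *node)
{
    while (atomic_load(&node->wait)) {
        int loops;

        for (loops = 0; loops < SPIN_THRESHOLD; loops++)
            if (!atomic_load(&node->wait))
                return;             /* headship was handed to us */

        atomic_store(&node->halted, true);
        if (atomic_load(&node->wait))   /* re-check to avoid a lost kick */
            pv_wait(node);
        atomic_store(&node->halted, false);
    }
}

/* Release side: hand over to the successor, waking it if it slept. */
static void pv_handoff(struct pv_node *next)
{
    atomic_store(&next->wait, false);
    if (atomic_load(&next->halted))
        pv_kick(next);
}

The main tuning knob is the balance between spinning and halting:
halting too eagerly costs hypercalls on every handoff, while spinning
too long wastes cycles that a preempted lock holder cannot use, which
is likely where the remaining gap against the PV ticketlock lies.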
Waiman Long (8):
qspinlock: Introducing a 4-byte queue spinlock implementation
qspinlock, x86: Enable x86-64 to use queue spinlock
qspinlock, x86: Add x86 specific optimization for 2 contending tasks
pvqspinlock, x86: Allow unfair spinlock in a real PV environment
pvqspinlock, x86: Enable unfair queue spinlock in a KVM guest
pvqspinlock, x86: Rename paravirt_ticketlocks_enabled
pvqspinlock, x86: Add qspinlock para-virtualization support
pvqspinlock, x86: Enable KVM to use qspinlock's PV support
 arch/x86/Kconfig                       |   12 +
 arch/x86/include/asm/paravirt.h        |    9 +-
 arch/x86/include/asm/paravirt_types.h  |   12 +
 arch/x86/include/asm/pvqspinlock.h     |  176 ++++++++++
 arch/x86/include/asm/qspinlock.h       |  133 +++++++
 arch/x86/include/asm/spinlock.h        |    9 +-
 arch/x86/include/asm/spinlock_types.h  |    4 +
 arch/x86/kernel/Makefile               |    1 +
 arch/x86/kernel/kvm.c                  |   73 ++++-
 arch/x86/kernel/paravirt-spinlocks.c   |   15 +-
 arch/x86/xen/spinlock.c                |    2 +-
 include/asm-generic/qspinlock.h        |  122 +++++++
 include/asm-generic/qspinlock_types.h  |   61 ++++
 kernel/Kconfig.locks                   |    7 +
 kernel/locking/Makefile                |    1 +
 kernel/locking/qspinlock.c             |  610 +++++++++++++++++++++++++++++++++
 16 files changed, 1239 insertions(+), 8 deletions(-)
create mode 100644 arch/x86/include/asm/pvqspinlock.h
create mode 100644 arch/x86/include/asm/qspinlock.h
create mode 100644 include/asm-generic/qspinlock.h
create mode 100644 include/asm-generic/qspinlock_types.h
create mode 100644 kernel/locking/qspinlock.c