[Xen-devel] [PATCH v6 0/2] memory-hotplug: add automatic onlining policy for the newly added memory
Changes since v5:

Patch 1:
 - Mention possible failures during automatic onlining in
   memory-hotplug.txt [David Rientjes]
 - Add Daniel's Reviewed-by: (hope it stands)

Patch 2:
 - Change the last 'domU' -> 'target domain' in Kconfig [Daniel Kiper]
 - Add Daniel's Reviewed-by:
 - Add David's Acked-by:

Original description:

Currently, all newly added memory blocks remain in the 'offline' state
unless someone onlines them. Some Linux distributions carry special udev
rules like:

 SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"

to make this happen automatically. This is not a great solution for
virtual machines, where memory hotplug is used to address high memory
pressure situations: such onlining is slow, and the userspace process
doing it (udev) risks being killed by the OOM killer because it will
probably need to allocate some memory itself.

Introduce a default policy for newly added memory blocks through the
/sys/devices/system/memory/auto_online_blocks file, with two possible
values: "offline", which preserves the current behavior, and "online",
which brings all newly added memory blocks online as soon as they are
added. The default is "offline". A short usage sketch follows at the
end of this message.

Vitaly Kuznetsov (2):
  memory-hotplug: add automatic onlining policy for the newly added
    memory
  xen_balloon: support memory auto onlining policy

 Documentation/memory-hotplug.txt | 23 ++++++++++++++++++++---
 drivers/base/memory.c            | 34 +++++++++++++++++++++++++++++++++-
 drivers/xen/Kconfig              | 23 +++++++++++++++--------
 drivers/xen/balloon.c            | 11 ++++++++++-
 include/linux/memory.h           |  3 +++
 include/linux/memory_hotplug.h   |  4 +++-
 mm/memory_hotplug.c              | 17 +++++++++++++++--
 7 files changed, 99 insertions(+), 16 deletions(-)

--
2.5.0
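
[Editor's usage sketch] For illustration, a minimal shell sketch of how
the new policy could be exercised from userspace. The
auto_online_blocks path and its two values come from the description
above; the per-block 'state' file is the existing memory-hotplug sysfs
interface, and "memory32" is just a placeholder for whichever block was
hot-added:

  # Read the current auto-onlining policy (default: "offline").
  cat /sys/devices/system/memory/auto_online_blocks

  # Switch to automatic onlining: blocks added from now on are brought
  # online by the kernel itself, with no udev round trip (and no risk
  # of the onlining process being OOM-killed).
  echo online > /sys/devices/system/memory/auto_online_blocks

  # With the policy left at "offline" (the old behavior), a hot-added
  # block still has to be onlined by hand or by a udev rule:
  echo online > /sys/devices/system/memory/memory32/state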