[Xen-devel] [PATCH v3 0/2] kexec: Use hypercall_create_continuation to protect KEXEC ops
During testing (using the script below) we found that multiple
concurrent invocations of kexec load/unload are not safe.

This problem does not exist with classic Xen kernels, where
kexec-tools performed the kexec via the Linux kernel syscall (which in
turn made the hypercall): the Linux code takes a mutex_trylock, which
inhibits multiple concurrent calls.

But now that kexec-tools uses xc_kexec_* directly, that protection is
gone, and we need to guard against multiple concurrent invocations in
the hypervisor itself.
Please see the patches and review at your convenience!
==== try-crash.pl from bhavesh.davda@xxxxxxxxxx ====
#!/usr/bin/perl -w
use strict;
use warnings;
use threads;

sub threaded_task {
    threads->create(sub {
        my $thr_id = threads->self->tid;
        #print "Starting load thread $thr_id\n";
        system("/sbin/kexec -p --command-line=\"placeholder"
             . " root=/dev/mapper/nimbula-root ro rhbg console=tty0"
             . " console=hvc0 earlyprintk=xen nomodeset printk.time=1"
             . " irqpoll maxcpus=1 nr_cpus=1 reset_devices"
             . " cgroup_disable=memory mce=off selinux=0"
             . " console=ttyS1,115200n8\""
             . " --initrd=/boot/initrd-4.1.12-61.1.9.el6uek.x86_64kdump.img"
             . " /boot/vmlinuz-4.1.12-61.1.9.el6uek.x86_64");
        #print "Ending load thread $thr_id\n";
        threads->detach(); # End thread.
    });
    threads->create(sub {
        my $thr_id = threads->self->tid;
        #print "Starting unload thread $thr_id\n";
        system("/sbin/kexec -p -u");
        #print "Ending unload thread $thr_id\n";
        threads->detach(); # End thread.
    });
}

for my $i (0..99) {
    threaded_task();
}
Eric DeVolder (2):
  kexec: use hypercall_create_continuation to protect KEXEC ops
  kexec: remove spinlock now that all KEXEC hypercall ops are protected
    at the top-level

 xen/common/kexec.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)
--
2.7.4
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel