Re: [Xen-devel] [PATCH RFC V11 15/18] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor
On 07/25/2013 03:08 PM, Raghavendra K T wrote:
> On 07/25/2013 02:45 PM, Gleb Natapov wrote:
>> On Thu, Jul 25, 2013 at 02:47:37PM +0530, Raghavendra K T wrote:
>>> On 07/24/2013 06:06 PM, Raghavendra K T wrote:
>>>> On 07/24/2013 05:36 PM, Gleb Natapov wrote:
>>>>> On Wed, Jul 24, 2013 at 05:30:20PM +0530, Raghavendra K T wrote:
>>>>>> On 07/24/2013 04:09 PM, Gleb Natapov wrote:
>>>>>>> On Wed, Jul 24, 2013 at 03:15:50PM +0530, Raghavendra K T wrote:
>>>>>>>> On 07/23/2013 08:37 PM, Gleb Natapov wrote:
>>>>>>>>> On Mon, Jul 22, 2013 at 11:50:16AM +0530, Raghavendra K T wrote:
>>>>>>>>>> +static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
>>>>>>>>>> [...]
>>>>>>>>>> +
>>>>>>>>>> +	/*
>>>>>>>>>> +	 * halt until it's our turn and kicked. Note that we do safe halt
>>>>>>>>>> +	 * for irq enabled case to avoid hang when lock info is overwritten
>>>>>>>>>> +	 * in irq spinlock slowpath and no spurious interrupt occur to save us.
>>>>>>>>>> +	 */
>>>>>>>>>> +	if (arch_irqs_disabled_flags(flags))
>>>>>>>>>> +		halt();
>>>>>>>>>> +	else
>>>>>>>>>> +		safe_halt();
>>>>>>>>>> +
>>>>>>>>>> +out:
>>>>>>>>> So here now interrupts can be either disabled or enabled. Previous
>>>>>>>>> version disabled interrupts here, so are we sure it is safe to have
>>>>>>>>> them enabled at this point? I do not see any problem yet, will keep
>>>>>>>>> thinking.
>>>>>>>> If we enable interrupts here, then after
>>>>>>>> +	cpumask_clear_cpu(cpu, &waiting_cpus);
>>>>>>>> if we start serving a lock for an interrupt that came here, the
>>>>>>>> cpumask clear and w->lock = NULL may not happen atomically. If the
>>>>>>>> irq spinlock does not take the slow path, we would have a non-NULL
>>>>>>>> value for lock, but with no information in waiting_cpus. I am still
>>>>>>>> thinking about what the problem with that would be.
>>>>>>> Exactly, for the kicker the waiting_cpus and w->lock updates are
>>>>>>> non-atomic anyway.

Yes. Thanks, that did the trick. I did like below in unknown_nmi_error():

	if (cpumask_test_cpu(smp_processor_id(), &waiting_cpus))
		return;

But I believe you asked for the NMI method only for experimental purposes,
to check the upper bound, because, as I doubted above, for a spurious NMI
(i.e. when the unlocker kicks after the waiter has already got the lock)
we would still hit the unknown NMI error. I hit the spurious NMI over 1656
times over the entire benchmark run, along with

	INFO: NMI handler (arch_trigger_all_cpu_backtrace_handler) took too long to run: 24.886 msecs

etc. (And we cannot simply swallow those, because it would mean we bypass
the unknown NMI error even in genuine cases.)
Here is the result of my dbench test (32-core machine, 32-vCPU guest, HT off):
              ---------- % improvement ----------
              pvspinlock    pvspin_ipi    pvspin_nmi
dbench_1x         0.9016        0.7442        0.7522
dbench_2x        14.7513       18.0164       15.9421
dbench_3x        14.7571       17.0793       13.3572
dbench_4x         6.3625        8.7897        5.3800
So I am seeing a 2-4% improvement with the IPI method in the contended cases.
Gleb,
do you think the current series looks good [one patch I have resent with
the in_nmi() check], or do you think I have to respin the series with the
IPI method etc., or are there any other concerns I have to address? Please
let me know.
PS: [Sorry for the late reply; I was quickly checking whether an unfair
lock with lockowner is better. It did not prove to be, though, and so far
all the results favor this series.]
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel