Re: [Xen-devel] Xen ARM and rtds real-time scheduler: local_irq_is_enabled() assert
On Fri, 2014-09-26 at 16:02 +0200, Leonardo Taccari wrote:
> Good afternoon to the entire Xen community!
>
Hi Leonardo,
> In the previous months I have successfully installed and run xen-4.4 on a
> cubieboard2 (running Debian GNU/Linux testing armhf).
>
Cool!
> Now I am interested in trying and testing the rtds real-time scheduler
>
Purely out of curiosity, may I ask why? :-)
> I have applied the four patches regarding the rtds real-time scheduler
> that appeared on the xen-devel@ ML with the subject "[PATCH for 4.5 v4 [1-4]/4]"
> (I also needed to apply a patch from Ian Campbell that fixes a printf(3)
> format string in burn_budget()[1]).
>
Right. As others said, you should just use the latest staging branch, which
already includes those patches.
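In case it helps, grabbing it would be something like this (just a sketch;
this is the usual upstream Xen tree on xenbits):

    # clone the upstream Xen tree and switch to the staging branch
    git clone git://xenbits.xen.org/xen.git
    cd xen
    git checkout staging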
> [...]
> root@cubieboard:~# /etc/init.d/xencommons start
> Starting /usr/local/sbin/xenstored...
> Setting domain 0 name, domid and JSON config...
> Done setting up Dom0
> Starting xenconsoled...
> Starting QEMU as disk backend for dom0
> /etc/init.d/xencommons: line 114: /usr/local/lib/xen/bin/qemu-system-i386: No such file or directory
> root@cubieboard:~# xl sched-rtds
> root@cubieboard:~# xl sched-rtds -d Domain-0 -p 20000 -b 10000
> libxl: error: libxl.c:5481:sched_rtds_domain_set: getting domain sched rtds: Invalid argument
> libxl_domain_sched_params_set failed.
>
This is expected: you can't change the scheduler of just one domain. What
you must do is what you actually try below, i.e., move the domain into a
cpupool whose scheduler is the one you want.
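For reference, the whole sequence would look something like this (just a
sketch, reusing the pool and domain names from your log):

    # move the guest into the pool that runs the rtds scheduler
    xl cpupool-migrate debian-testing rt
    # and only then set its per-domain rtds parameters (microseconds)
    xl sched-rtds -d debian-testing -p 20000 -b 10000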
> root@cubieboard:~# xl cpupool-list
> Name               CPUs   Sched    Active   Domain count
> Pool-0                2    credit     y          1
> root@cubieboard:~# xl cpupool-cpu-remove Pool-0 1
> root@cubieboard:~# xl cpupool-list
> Name               CPUs   Sched    Active   Domain count
> Pool-0                1    credit     y          1
> root@cubieboard:~# xl cpupool-create name=\"rt\" sched=\"rtds\"
> Using config file "command line"
> cpupool name: rt
> scheduler: rtds
> number of cpus: 0
>
Mmm... so, you created a cpupool without any cpu assigned to it.
That's legal, as you can see, but then I don't think you should/could
move a domain to it.
And in fact, if you look at xen/common/cpupool.c, where
XEN_SYSCTL_CPUPOOL_OP_MOVEDOMAIN is handled, you'll find this:
    if ( (c != NULL) && cpumask_weight(c->cpu_valid) )
    {
        d->cpupool->n_dom--;
        ret = sched_move_domain(d, c);
        if ( ret )
            d->cpupool->n_dom++;
        else
            c->n_dom++;
    }
In your case, I think cpumask_weight(c->cpu_valid) is 0, so the whole
block is skipped and the domain is never actually moved.
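By the way, one way to avoid ending up in that situation would be to give
the pool at least one pCPU right at creation time, instead of creating it
empty and adding the cpu afterwards. A sketch (I'm assuming the cpus= key
is accepted on the xl command line the same way it is in a cpupool config
file, so double check xlcpupool.cfg for your version):

    # free pCPU 1 from the default pool first
    xl cpupool-cpu-remove Pool-0 1
    # create the pool with that pCPU already assigned (cpus= syntax: see above)
    xl cpupool-create name=\"rt\" sched=\"rtds\" cpus=[\"1\"]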
> (XEN) Initializing RTDS scheduler
> (XEN) WARNING: This is experimental software in development.
> (XEN) Use at your own risk.
> root@cubieboard:~# xl cpupool-cpu-add rt 1
> root@cubieboard:~# cd xen/debian-testing
> root@cubieboard:~/xen/debian-testing# losetup /dev/loop0
> /root/xen/debian-testing/debian-testing.img
> root@cubieboard:~/xen/debian-testing# xl create -c debian-testing.cfg
> root@cubieboard:~# xl list
> Name                                ID   Mem VCPUs      State   Time(s)
> Domain-0                             0   128     2     r-----      14.3
> debian-testing                       1   128     1     ------       6.5
> root@cubieboard:~# xl cpupool-list
> Name               CPUs   Sched    Active   Domain count
> Pool-0                1    credit     y          2
> rt                    1    rtds       y          0
> root@cubieboard:~# xl cpupool-migrate debian-testing rt
>
I've only tried on x86, but what I see confirms the analysis above:
root@tg03:~# xl list
Name                                ID    Mem VCPUs      State   Time(s)
Domain-0                             0   1023    24     r-----      20.7
debian.guest.osstest                 1  19256     2     -b----       5.3
root@tg03:~# xl -vvv cpupool-migrate 1 rt
libxl: error: libxl.c:6215:libxl_cpupool_movedomain: Error moving domain to cpupool
root@tg03:~# xl sched-rtds
Cpupool rt: sched=RTDS
Name                                ID    Period    Budget
root@tg03:~# xl sched-credit
Cpupool Pool-0: tslice=30ms ratelimit=1000us
Name                                ID  Weight  Cap
Domain-0                             0     256    0
debian.guest.osstest                 1     256    0
> (XEN) Assertion 'local_irq_is_enabled()' failed, line 137, file spinlock.c
> root@cubieboard:~/xen/debian-testing# (XEN) Xen BUG at spinlock.c:137
> (XEN) CPU1: Unexpected Trap: Undefined Instruction
> (XEN) ----[ Xen-4.5-unstable arm32 debug=y Not tainted ]----
> (XEN) Xen call trace:
> (XEN) [<0024b590>] __bug+0x2c/0x44 (PC)
> (XEN) [<0024b590>] __bug+0x2c/0x44 (LR)
> (XEN) [<00231f1c>] _spin_lock_irq+0x4c/0xdc
> (XEN) [<0022c2c4>] rt_context_saved+0x54/0x1d0
> (XEN) [<0023082c>] context_saved+0x7c/0xa4
> (XEN) [<00250758>] schedule_tail+0x150/0x31c
> (XEN) [<00250e18>] context_switch+0x114/0x128
> (XEN) [<0022d5a4>] schedule+0x8e4/0x940
> (XEN) [<002310c8>] __do_softirq+0xf8/0x118
> (XEN) [<00231190>] do_softirq+0x18/0x28
> (XEN) [<00250ce0>] idle_loop+0x1a4/0x1c8
> (XEN) [<0025d6c0>] start_secondary+0x1a0/0x1bc
> (XEN)
>
Interesting... As I said, the domain shouldn't even be there, so I don't
immediately see the root cause of this.
If you can try it with the latest staging, let us know how it goes.
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)