Re: [Xen-devel] [PATCH for 4.7 4/4] xen: adopt .deinit_pdata and improve timer handling
On Tue, May 3, 2016 at 5:46 PM, Dario Faggioli
<dario.faggioli@xxxxxxxxxx> wrote:
>
> The scheduling hooks API is now used properly, and no
> initialization or de-initialization happen in
> alloc/free_pdata any longer.
>
> In fact, just like it is for Credit2, there is no real
> need for implementing alloc_pdata and free_pdata.
>
> This also made it possible to improve the replenishment
> timer handling logic, such that now the timer is always
> kept on one of the pCPUs of the scheduler it's servicing.
> Before this commit, in fact, even if the pCPU where the
> timer happened to be initialized at creation time was
> moved to another cpupool, the timer stayed there,
> potentially interfering with the new scheduler of the
> pCPU itself.
>
> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
> --
Reviewed-and-Tested-by: Meng Xu <mengxu@xxxxxxxxxxxxx>
I do have one minor comment, although it is not important
and is not really about this patch itself...
> @@ -614,7 +612,8 @@ rt_deinit(struct scheduler *ops)
> {
> struct rt_private *prv = rt_priv(ops);
>
> - kill_timer(prv->repl_timer);
> + ASSERT(prv->repl_timer->status == TIMER_STATUS_invalid ||
> + prv->repl_timer->status == TIMER_STATUS_killed);
I found that in xen/timer.h, the comment after the definition of
TIMER_STATUS_invalid is:

#define TIMER_STATUS_invalid 0 /* Should never see this. */

That comment is somewhat at odds with how the status is used here.
What exactly does "Should never see this" mean?
This _invalid status is used in timer.h, and it is the status a
timer has when it is initialized by init_timer().
So I'm thinking maybe this comment could be improved to avoid confusion?
Anyway, this is just a comment and should not be a blocker, IMO. I
just wanted to raise it since I saw it... :-)
===About the testing I did===
---Below is how I tested it---
I booted up two VMs, created one cpupool for each type of
scheduler, and migrated the VMs around.
The scripts to run the test cases can be found at
https://github.com/PennPanda/scripts/tree/master/xen/xen-test
---Below is the testing scenario---
echo "start test case 1..."
xl cpupool-list
xl cpupool-destroy cpupool-credit
xl cpupool-destroy cpupool-credit2
xl cpupool-destroy cpupool-rtds
xl cpupool-create ${cpupool_credit_file}
xl cpupool-create ${cpupool_credit2_file}
xl cpupool-create ${cpupool_rtds_file}
# Add cpus to each cpupool
echo "Add CPUs to each cpupool"
for ((i=0; i<5; i+=1)); do
    xl cpupool-cpu-remove Pool-0 ${i}
done
echo "xl cpupool-cpu-add cpupool-credit 0"
xl cpupool-cpu-add cpupool-credit 0
echo "xl cpupool-cpu-add cpupool-credit2 1,2"
xl cpupool-cpu-add cpupool-credit2 1
xl cpupool-cpu-add cpupool-credit2 2
echo "xl cpupool-cpu-add cpupool-rtds 3,4"
xl cpupool-cpu-add cpupool-rtds 3
xl cpupool-cpu-add cpupool-rtds 4
xl cpupool-list -c
xl cpupool-list
# Migrate vm1 among cpupools
echo "Migrate ${vm1_name} among cpupools"
xl cpupool-migrate ${vm1_name} cpupool-rtds
xl cpupool-migrate ${vm1_name} cpupool-credit2
xl cpupool-migrate ${vm1_name} cpupool-rtds
xl cpupool-migrate ${vm1_name} cpupool-credit
xl cpupool-migrate ${vm1_name} cpupool-rtds
Thank you very much and best regards,
Meng
--
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel