[Xen-devel] Cpu pools discussion
- To: xen-devel@xxxxxxxxxxxxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
- From: George Dunlap <dunlapg@xxxxxxxxx>
- Date: Mon, 27 Jul 2009 16:20:58 +0100
- Delivery-date: Mon, 27 Jul 2009 08:21:23 -0700
- List-id: Xen developer discussion <xen-devel.lists.xensource.com>
Keir (and community),
Any thoughts on Juergen Gross' patch on cpu pools?
As a reminder, the idea is to allow "pools" of cpus that would have
separate schedulers. Physical cpus and domains can be moved from one
pool to another only by an explicit command. The main purpose Fujitsu
seems to have is to allow a simple machine "partitioning" that is more
robust than using simple affinity masks. Another potential advantage
would be the ability to use different schedulers for different pools.
For my part, it seems like the patches should be OK. The main thing I
don't like is the ugliness related to continue_hypercall_on_cpu(),
described below.
Juergen, could you remind us what the advantages of having pools in
the hypervisor were, versus just having affinity masks (with maybe
some sugar in the toolstack)?
Re the ugly part of the patch, relating to continue_hypercall_on_cpu():
Domains are assigned to a pool, so
if continue_hypercall_on_cpu() is called for a cpu not in the domain's
pool, you can't just run it normally. Juergen's solution (IIRC) was to
pause all domains in the other pool, temporarily move the cpu in
question to the calling domain's pool, finish the hypercall, then move
the cpu in question back to the other pool.
Since there are a lot of antecedents in that, let's take an example:
Two pools; Pool A has cpus 0 and 1, pool B has cpus 2 and 3.
Domain 0 is running in pool A, domain 1 is running in pool B.
Domain 0 calls "continue_hypercall_on_cpu()" for cpu 2.
Cpu 2 is in pool B, so Juergen's patch:
* Pauses domain 1
* Moves cpu 2 to pool A
* Finishes the hypercall
* Moves cpu 2 back to pool B
* Unpauses domain 1
That seemed a bit ugly to me, but I'm not familiar enough with the use
cases or the code to know if there's a cleaner solution.