Re: [Xen-devel] [RFC PATCH v1 00/16] xen: sched: implement core-scheduling
On 12/10/2018 09:49, Dario Faggioli wrote:
> On Fri, 2018-10-12 at 07:15 +0200, Juergen Gross wrote:
>> On 11/10/2018 19:37, Dario Faggioli wrote:
>>>
>>> So, for example:
>>> - domain A has vCore0 and vCore1
>>> - each vCore has 2 threads ({vCore0.0, vCore0.1} and
>>> {vCore1.0, vCore1.1})
>>> - we're on a 2-way SMT host
>>> - vCore1 is running on physical core 3 on the host
>>> - more specifically, vCore1.0 is currently executing on thread 0 of
>>> physical core 3 of the host, and vCore1.1 is currently executing on
>>> thread 1 of core 3 of the host
>>> - say that both vCore1.0 and vCore1.1 are in guest context
>>>
>>> Now:
>>> * vCore1.0 blocks. What happens?
>>
>> It goes to vBlocked (the physical thread sits in the
>> hypervisor waiting either for a (core-)scheduling event or for
>> vCore1.0 to unblock). vCore1.1 keeps running. Or, if vCore1.1
>> is already vIdle/vBlocked, vCore1 switches to blocked and the
>> scheduler looks for another vCore to schedule on the physical
>> core.
>>
> Ok. And then we'll have one thread in guest context, and one thread in
> Xen (albeit idle, in this case). In these other cases...
>
>>> * vCore1.0 makes a hypercall. What happens?
>>
>> Same as today. The hypercall is being executed.
>>
>>> * vCore1.0 VMEXITs. What happens?
>>
>> Same as today. The VMEXIT is handled.
>>
> ... we have one thread in guest context, and one thread in Xen, and the
> one in Xen is not just sitting idle: it is doing hypercall and VMEXIT
> handling.
>
>> In case you are referring to a potential rendezvous for e.g. L1TF
>> mitigation: this would be handled in a scheduler-agnostic way.
>>
> Yes, that was what I was thinking of. I.e., in order to be able to use
> core-scheduling as a _fully effective_ mitigation for stuff like L1TF,
> we'd need something like that.
>
> In fact, core-scheduling "per se" mitigates leaks among guests, but if
> we want to fully prevent two threads from ever being in different
> security contexts (like one in guest and one in Xen, to prevent Xen
> data leaking to a guest), we do need some kind of synchronized Xen
> entries/exits, AFAIUI.
>
> What I'm trying to understand right now is whether implementing things
> the way you're proposing would help achieve that. And what I've
> understood so far is that, no, it doesn't.
This aspect will need about the same effort in both solutions.
Maybe my proposal would make it easier to decide whether such a
rendezvous is needed, as there would be only one instance to ask
(schedule.c) instead of multiple instances (sched_*.c).
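Just to illustrate what I mean by such a rendezvous (this is only a
conceptual userspace model with made-up names, compiled with
"gcc -pthread", not actual Xen code): before any thread of a core
re-enters guest context, it waits until its sibling has also left Xen
context, so the two are never in different security contexts at the
same time.

#include <pthread.h>
#include <stdio.h>

#define SIBLINGS 2

/* Models the synchronized guest entry/exit of the two SMT siblings. */
static pthread_barrier_t rendezvous;

static void *sibling(void *arg)
{
    int id = *(int *)arg;

    for (int round = 0; round < 3; round++) {
        /* This sibling is in Xen context (hypercall/VMEXIT handling). */
        printf("thread %d: in Xen\n", id);

        /* Nobody re-enters guest context until both have left Xen. */
        pthread_barrier_wait(&rendezvous);
        printf("thread %d: in guest\n", id);

        /*
         * In the real thing, one sibling taking a VMEXIT would also
         * have to force the other one out of guest context; that part
         * is omitted here.
         */
        pthread_barrier_wait(&rendezvous);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[SIBLINGS];
    int ids[SIBLINGS] = { 0, 1 };

    pthread_barrier_init(&rendezvous, NULL, SIBLINGS);
    for (int i = 0; i < SIBLINGS; i++)
        pthread_create(&threads[i], NULL, sibling, &ids[i]);
    for (int i = 0; i < SIBLINGS; i++)
        pthread_join(threads[i], NULL);
    pthread_barrier_destroy(&rendezvous);

    return 0;
}

Doing this once in schedule.c would mean the per-scheduler code never
has to know such a rendezvous even exists.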
>
> The main difference between the two approaches would be that we
> implement it once in schedule.c, for all schedulers. But I see this as
> something having both up and down sides (yeah, like everything on
> Earth, I know! :-P). More on this later.
>
>>> All in all, I like the idea, because it is about introducing nice
>>> abstractions, it is general, etc., but it looks like a major rework
>>> of the scheduler.
>>
>> Correct. Finally something to do :-p
>>
> Indeed! :-)
>
>>> Note that, while this series, which tries to implement core-scheduling
>>> for Credit1, is rather long and messy, doing the same (and with a
>>> similar approach) for Credit2 is a lot easier and nicer. I have it
>>> almost ready, and will send it soon.
>>
>> Okay, but would it keep vThreads of the same vCore always running
>> together on the same physical core?
>>
> It doesn't right now, as we don't have a way to expose such information
> to the guest, yet. And since without such a mechanism, the guest can't
> take advantage of something like this (neither from a performance nor
> from a vuln. mitigation point of view), I kept that out.
>
> But I certainly can see about making it do so (I was already planning
> to).
>
>>> Right. But again, in Credit2, I've been able to implement socket-wise
>>> coscheduling with this approach (I mean, an approach similar to the one
>>> in this series, but adapted to Credit2).
>>
>> And then there still is sched_rt.c
>>
> Ok, so I think this is the main benefit of this approach. We do the
> thing once, and all schedulers get core-scheduling (or whatever
> granularity of group scheduling we implement/allow).
>
> But how easy is it to opt out, if one doesn't want it? E.g., in the
> context of L1TF, what if I'm not affected, and hence am not interested
> in core-scheduling? What if I want a cpupool with core-scheduling and
> one without?
>
> I may be wrong, but, off the top of my head, it seems to me that
> doing things in schedule.c makes this a lot harder, if possible at all.
Why? This would be a per-cpupool setting, so the scheduling granularity
would live in struct scheduler.
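To give a rough idea (a sketch only, with invented enum/field names, not
an actual patch): each cpupool has its own scheduler instance, so the
granularity could simply be another per-instance field, e.g.:

/* Sketch only; the enum and field names are made up for illustration. */
enum sched_gran {
    SCHED_GRAN_CPU,     /* today's behaviour: schedule single threads */
    SCHED_GRAN_CORE,    /* core-scheduling: siblings are switched together */
    SCHED_GRAN_SOCKET,  /* socket-wise coscheduling */
};

struct scheduler {
    /* ... existing fields ... */
    enum sched_gran granularity;  /* chosen per cpupool at pool creation */
};

A pool that isn't interested in core-scheduling (e.g. because the host
isn't affected by L1TF) would just keep SCHED_GRAN_CPU.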
Juergen
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel