
Re: [Xen-devel] RFC on deprivileged x86 hypervisor device models



>>> On 20.07.15 at 15:43, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 17/07/15 16:38, Jan Beulich wrote:
>>>>> On 17.07.15 at 17:19, <Ben.Catterall@xxxxxxxxxx> wrote:
>>> On 17/07/15 15:20, Jan Beulich wrote:
>>>> If not, then method 2 would seem quite a bit less troublesome than
>>>> method 1, yet method 3 would (even if more involved in terms of
>>>> changes to be done) perhaps yield the most elegant result.
>>> I agree that method 3 is more elegant. If both you and Andrew are OK
>>> with going in a per-vCPU stack direction for Xen in general, then I'll
>>> write a per-vCPU stack patch first and then a second patch that adds
>>> the ring 3 feature on top of that.
>> Actually, improvements to common/wait.c were also thought of long
>> ago, for whenever per-vCPU stacks become available. Unfortunately,
>> the few users of these interfaces never made this important enough
>> a work item.
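
For context: common/wait.c has to park a vCPU in the middle of a
hypercall while its pCPU goes off to run something else, and with
per-pCPU stacks that means copying the live stack contents into a
per-vCPU buffer and putting them back on resume. A much-simplified
sketch of that idea follows; it is not Xen's actual wait.c interface,
and all names and sizes here are illustrative only.

/* Sketch only: waitqueue_vcpu, save_stack and restore_stack are
 * illustrative names, not Xen's actual wait.c interface. */

#include <string.h>

#define PAGE_SIZE  4096UL

struct waitqueue_vcpu {
    unsigned long saved_sp;            /* stack pointer at save time    */
    unsigned long saved_bytes;         /* amount of stack copied        */
    char          stack[PAGE_SIZE];    /* bounded copy of the live part */
};

/* Park a vCPU: copy everything between the current stack pointer and
 * the top of the per-pCPU stack, so the stack can be reused. */
static int save_stack(struct waitqueue_vcpu *wqv,
                      unsigned long sp, unsigned long stack_top)
{
    unsigned long bytes = stack_top - sp;

    if ( bytes > sizeof(wqv->stack) )
        return -1;                     /* frame too deep to park */

    memcpy(wqv->stack, (const void *)sp, bytes);
    wqv->saved_sp = sp;
    wqv->saved_bytes = bytes;
    return 0;
}

/* Resume: put the saved frame back in place before jumping back in. */
static void restore_stack(const struct waitqueue_vcpu *wqv)
{
    memcpy((void *)wqv->saved_sp, wqv->stack, wqv->saved_bytes);
}

With per-vCPU stacks the copying would disappear entirely: pausing a
vCPU becomes just a stack switch, which is what would make the wait.c
cleanup attractive.
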
> 
> While per-vCPU stacks would be nice, there are a number of challenges to
> overcome before they can sensibly be used.  Off the top of my head:
> per-vCPU state living at the top of the stack, hard-coded stack
> addresses in the emulation stubs, IST stacks moving back onto the
> primary stack, and splitting out a separate IRQ stack if the ring 0
> stack is to be left with partial state on it.
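
For context on the first point above: 'current' and the rest of the
per-vCPU bookkeeping are today recovered purely from the stack pointer,
which only works while there is exactly one stack per pCPU. A simplified
sketch of the scheme, loosely following xen/include/asm-x86/current.h;
the constants and field names below are illustrative, not the real
layout.

/* Sketch only: simplified illustration of locating cpu_info from the
 * stack pointer; constants and fields are not the real layout. */

#define PAGE_SIZE   4096UL
#define STACK_SIZE  (8 * PAGE_SIZE)     /* one 8-page stack per pCPU */

struct vcpu;                            /* opaque here */

struct cpu_info {
    /* guest register frame omitted for brevity */
    unsigned int processor_id;
    struct vcpu *current_vcpu;
};

/* The cpu_info block sits at the very top of the per-pCPU stack, so any
 * stack pointer within that stack finds it by rounding up to the next
 * STACK_SIZE boundary and stepping back over the structure. */
static inline struct cpu_info *cpu_info_from_sp(unsigned long sp)
{
    return (struct cpu_info *)((sp | (STACK_SIZE - 1)) + 1) - 1;
}

#define current_from(sp) (cpu_info_from_sp(sp)->current_vcpu)

Per-vCPU stacks would need either a cpu_info block kept up to date at
the top of every vCPU stack, or some other way of locating the pCPU,
which is why this is the first item in the list above.
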
> 
> Therefore I recommend method 2, reusing the existing kludge we have.
> That will allow you to actually investigate some of the depriv aspects
> in the time you have.

Yeah, I too meant to point out that this per-vCPU stack work is
likely too much to take on as a preparatory step here; I simply
forgot.

Jan

