
Re: [Xen-devel] [PATCH v3 1/8] public / x86: Introduce __HYPERCALL_dm_op...



On 17/01/17 12:42, Jan Beulich wrote:
>
>>>>>>> And I'm not sure we really need to bother considering hypothetical
>>>>>>> 128-bit architectures at this point in time.
>>>>>> Because considering this case will keep us from painting ourselves
>>>>>> into a corner.
>>>>> Why would we consider this case here, when all other parts of the
>>>>> public interface don't do so?
>>>> This is asking why we should continue to shoot ourselves in the foot,
>>>> ABI-wise, rather than trying to do something better.
>>>>
>>>> And the answer is that I'd prefer that we started fixing the problem,
>>>> rather than making it worse.
>>> Okay, so 128-bit handles then. But wait, we should be prepared for
>>> 256-bit environments too, so 256-bit handles then. But wait, ...
>> Precisely. A fixed bit width doesn't work, and cannot work going
>> forwards.  Using a fixed bit size will force us to burn a hypercall
>> number every time we want to implement this ABI at a larger bit size.
> With wider machine word width the number space of hypercalls
> widens too, so I would not be worried at all about using new hypercall
> numbers, or even wholesale new hypercall number ranges.

I will leave this to the other fork of the thread, but our hypercall
space does not extend.  It is currently fixed at an absolute maximum of
4k/32, i.e. 128 entries (one 32-byte stub per hypercall in the single
4k hypercall page).
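
To spell the arithmetic out (illustrative constants only; these names
are not taken from the Xen headers):

    /* Illustrative only; names invented for this example. */
    #define HYPERCALL_PAGE_SIZE  4096u  /* one 4k hypercall page          */
    #define HYPERCALL_STUB_SIZE    32u  /* one 32-byte stub per hypercall */
    #define MAX_HYPERCALLS \
        (HYPERCALL_PAGE_SIZE / HYPERCALL_STUB_SIZE)  /* = 128 */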

>
>>> Or maybe I'm simply not getting what you mean to put in place here.
>> The interface should be in terms of void * (and where appropriate,
>> size_t), from the guest's point of view, which is what a plain
>> GUEST_HANDLE() gives you.
> As said - that'll further break 32-bit tool stacks on 64-bit kernels.

DMOP is not a toolstack API.  It can only be issued by a kernel, after
the kernel has audited the internals of the buffers.
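
To illustrate the shape I mean (the structure and function names below
are invented for this sketch; they are not taken from the series or
from any real kernel code):

    /* Illustrative sketch only -- not code from the patch series or
     * from a real kernel; all names here are made up for the example. */
    #include <stddef.h>
    #include <stdint.h>

    /* What the hypercall ABI sees: a plain (void *, size_t) pair, whose
     * width follows the guest kernel's native word size. */
    struct native_dm_op_buf {
        void *h;
        size_t size;
    };

    /* What a 32-bit toolstack hands its 64-bit kernel via an ioctl. */
    struct compat_dm_op_buf {
        uint32_t uptr;
        uint32_t size;
    };

    /* The kernel audits the 32-bit descriptors and repacks them into
     * native-width ones before issuing the hypercall, so the 32-bit
     * userspace layout never reaches the hypervisor. */
    static void repack_bufs(struct native_dm_op_buf *dst,
                            const struct compat_dm_op_buf *src,
                            unsigned int nr)
    {
        unsigned int i;

        for ( i = 0; i < nr; i++ )
        {
            dst[i].h = (void *)(uintptr_t)src[i].uptr;
            dst[i].size = src[i].size;
        }
    }

Only the kernel<->hypervisor boundary needs a stable layout, and that
layout can be expressed entirely in the kernel's native types.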

~Andrew
