Re: [Xen-devel] [PATCH 2/5] xen/vm_access: Support for memory-content hiding
>>> On 09.05.15 at 08:55, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
> On 05/09/2015 02:34 AM, Tamas K Lengyel wrote:
>>>>> @@ -193,6 +200,11 @@ struct vm_event_xsetbv {
>>>>>     uint64_t value;
>>>>> };
>>>>>
>>>>> +struct vm_event_emul_read_data {
>>>>> +    uint32_t size;
>>>>> +    uint8_t data[164];
>>>>
>>>> This number needs an explanation.
>>>
>>> It's less than the size of the x86_regs and enough for all the cases
>>> we've tested so far...
>>>
>>>
>>> Thanks,
>>> Razvan
>> I feel like 164 bytes is really wasteful for all vm_events considering
>> this would be useful only in a very specific subset of cases. Not sure
>> what would be the right way to get around that... Maybe having another
>> hypercall (potentially under memop?) place that buffer somewhere
>> where Xen can access it before the vm_event response is processed?
>> That would require two hypercalls to be issued by the monitoring
>> domain, one to place the buffer and one for the event channel
>> notification being sent to Xen so that the response is placed on the
>> ring, but it might save space on the ring buffer for all other
>> cases/users.
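
(To make the two-step flow floated here concrete: a rough sketch from the monitoring domain's side, under the assumption that such a memop existed. Every function name below is hypothetical; nothing like this interface is in the tree.)

    /* All names hypothetical -- no such interface exists today. */

    /* Step 1, out of band: hand Xen the replacement bytes for this vcpu
     * (e.g. via a new memop subop) before replying to the event.       */
    struct emul_read_buffer {
        uint32_t size;
        uint8_t  data[164];
    } buf = { .size = len };
    memcpy(buf.data, replacement_bytes, len);
    hypothetical_set_emul_read_buffer(xch, domain_id, req.vcpu_id, &buf);

    /* Step 2, as today: put the now buffer-free response on the ring and
     * kick the event channel; Xen would pick up the staged buffer when
     * it processes this vcpu's response.                                */
    put_response_on_ring(&rsp);
    notify_event_channel(port);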
>
> How is it wasteful? Those bytes are in a union with the x86 registers
> that are already in each vm_event request and response, and the size of
> that buffer is smaller than the size of the x86 registers struct.
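
(For illustration, a condensed sketch of the layout Razvan describes; field names only approximate the posted series, and the register struct is abbreviated. Because the read buffer overlays the register block in a union, the ring entry is sized by its larger member and the 168 bytes cost nothing there.)

    /* Sketch only: names approximate the posted series. */
    #include <stdint.h>

    /* Register block already carried in every request/response
     * (abbreviated -- the real struct holds further control, segment
     * and MSR state).                                                 */
    struct vm_event_regs_x86 {
        uint64_t rax, rcx, rdx, rbx, rsp, rbp, rsi, rdi;
        uint64_t r8, r9, r10, r11, r12, r13, r14, r15;
        uint64_t rflags, rip, cr0, cr2, cr3, cr4;
    };                                   /* already more than 168 bytes */

    struct vm_event_emul_read_data {
        uint32_t size;
        uint8_t data[164];               /* 4 + 164 = 168 bytes */
    };

    struct vm_event_st {
        uint32_t version, flags, reason, vcpu_id;
        /* ... per-event payload union ... */
        union {
            struct vm_event_regs_x86 x86;
            struct vm_event_emul_read_data emul_read_data;
        } data;                          /* sized by its largest member */
    };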
As said elsewhere already (but just to clarify things here too) -
the wastefulness isn't in the event structure, but in the addition
of a field of this type to struct arch_vcpu.
Jan
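
(Illustrative only, with a hypothetical member name: the per-vCPU copy Jan objects to is the hypervisor-side staging area that has to survive between receiving the response and replaying the emulation. Placed directly in struct arch_vcpu, it costs roughly 168 bytes on every vCPU of every domain, introspected or not.)

    struct arch_vcpu {
        /* ... existing architectural state ... */

        /* Proposed addition (member name illustrative): staging area for
         * the introspection agent's replacement bytes, consulted by the
         * emulator when replaying the faulting instruction after an
         * emulate-style response.  Present unconditionally, even though
         * only introspected domains ever use it.                        */
        struct vm_event_emul_read_data vm_event_emul_read;
    };

One way to address the objection would be to keep only a pointer here and allocate the buffer when vm_event is actually enabled for the domain, shrinking the unconditional per-vCPU cost to pointer size.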