
Re: [Xen-devel] [PATCH] arm/vm_event: get/set registers



On Thu, Jul 28, 2016 at 10:16 PM, Razvan Cojocaru
<rcojocaru@xxxxxxxxxxxxxxx> wrote:
> On 07/29/16 00:25, Julien Grall wrote:
>>
>>
>> On 28/07/2016 22:05, Tamas K Lengyel wrote:
>>> On Thu, Jul 28, 2016 at 3:01 PM, Julien Grall <julien.grall@xxxxxxx>
>>> wrote:
>>> That's not how we do it with vm_event. Even on x86 we only selectively
>>> set registers, using the VM_EVENT_FLAG_SET_REGISTERS flag (albeit one
>>> not documented in the header). As for "not exposing them", it's a
>>> waste to declare separate structures for getting and setting. I'll
>>> change my mind about that if Razvan is on the side that we should
>>> start doing that, but I don't think that's the case at the moment.
>>
>> Is there any rationale to only set a subset of the information you
>> retrieved?
>
> The perennial speed optimization, but mainly that setting everything can
> have side-effects. On x86, I remember that when I wrote the initial
> patch this had something to do with the control registers - if you'd
> like, I can follow the code again and try to recall what the exact
> issue was.
>
> My main use-case at the time was to simply set EIP (I needed to be able
> to skip the current instruction if it was deemed malicious by the
> introspection engine). I believe it was assumed at the time that
> setting the GPRs is enough, and that this can be extended in the
> future by interested parties.
>

Yeap, it's pretty much the same case here: we just need to be able to
set the address of the next instruction without having to get and set
all registers. Changing the translation table or other control
registers underneath the OS is simply not necessary (even if it would
technically be possible).

Thanks,
Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
