
Re: [Xen-devel] Blocking CR and MSR writes via mem_access?



On 10/03/14 15:37, Andrew Cooper wrote:
> On 03/10/14 13:32, Tamas K Lengyel wrote:
>>
>> On Thu, Oct 2, 2014 at 12:49 PM, Razvan Cojocaru
>> <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>
>>     Hello,
>>
>>     Currently hvm_memory_event_cr3() and the other hvm_memory_event_*()
>>     functions in hvm.c can pause the VCPU and send a mem_event with
>>     the new
>>     value of the respective register, but especially in the case of CR
>>     events (as opposed to MSR events), this is done _after_ the value
>>     is set
>>     (please see hvm_set_cr3() in hvm.c).
>>
>>     It would be interesting from a memory introspection application's
>>     point
>>     of view to be able to receive a mem_event _before_ the value is
>>     set, and
>>     important to be able to veto the change.
>>
>>     A few questions:
>>
>>     1. Would it be acceptable to move the CR3 event sending code so that a
>>     mem_access client would receive the event _before_ the write takes
>>     place? Is this likely to break other mem_event clients that might rely
>>     on the event being received _after_ the value has been set?
>>
>>  
>> Yes, it would break existing applications.
>>  
>>
>>     2. I see that mem_event responses from all these cases (EPT
>>     violations,
>>     CR, MSR) are handled in p2m.c's p2m_mem_access_resume() (seems to be
>>     confirmed by testing). Is this correct?
>>
>>     3. What would be the sanest, most elegant way to modify Xen so that,
>>     once a mem_event reply has been received for one of these cases (CR,
>>     MSR), the write can then be rejected? I'm asking because, as always,
>>     ideally this would also benefit other Xen users, and an elegant patch
>>     is always more likely to find its way into mainline than a quick hack.
>>
>>
>> You can already block such writes with the existing post-write event
>> delivery. If you are continuously watching for writes, you know what
>> the previous value was (for CR events it is actually delivered to you
>> by Xen as well, as per my recent patch). If you don't like a particular
>> new value that was set, just reset it to the value you had / want.
>>
>> Tamas
> 
> That doesn't work if you attach an event listener between the previous MSR
> write and the one you wish to veto.
> 
> Having a "pre-write" event hook which the listener can register for
> (instead of the post-write hook) sounds like a plausible plan, where the
> result of the event can be Yes/No/"Do this instead".

The way I've been thinking about it is to add new mem_event response
flags (for example, MEM_EVENT_FLAG_SET_MSR), which the dom0 monitoring
application would set if the MSR write is allowed to happen, and then
have p2m_mem_access_resume() check for the flag and set some per-VCPU
data signalling that an MSR write should happen, for that MSR and that
value.
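
Roughly something like the sketch below; the per-VCPU structure, the
msr_write_req member I hang off struct hvm_vcpu, and the extra msr/value
fields in mem_event_response_t are all placeholders I'm making up for
illustration, and the loop is only an abridged version of what I
understand p2m_mem_access_resume() to do today:

/* Hypothetical per-VCPU state, e.g. hung off struct hvm_vcpu. */
struct hvm_msr_write_req {
    bool_t   pending; /* a write is waiting to be performed */
    uint32_t msr;     /* which MSR the listener approved */
    uint64_t value;   /* the value to be written */
};

/* Abridged sketch of the p2m_mem_access_resume() response loop. */
mem_event_response_t rsp;

while ( mem_event_get_response(d, &d->mem_event->access, &rsp) )
{
    struct vcpu *v = d->vcpu[rsp.vcpu_id];

    /* New: the listener set MEM_EVENT_FLAG_SET_MSR to allow the write. */
    if ( rsp.flags & MEM_EVENT_FLAG_SET_MSR )
    {
        v->arch.hvm_vcpu.msr_write_req.pending = 1;
        v->arch.hvm_vcpu.msr_write_req.msr     = rsp.msr;   /* new field */
        v->arch.hvm_vcpu.msr_write_req.value   = rsp.value; /* new field */
    }

    if ( rsp.flags & MEM_EVENT_FLAG_VCPU_PAUSED )
        vcpu_unpause(v);
}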

A bool_t parameter, let's call it "mem_event", could be added to
hvm_msr_write_intercept(); if mem_event is true (and mem_access is
enabled, MSR events are requested, etc.), the function would return
X86EMUL_OKAY right after the hvm_memory_event_msr(msr, msr_content) call
(i.e. send out the MSR mem_event and do nothing else). Passing
mem_event == 1 would be the default for all the code currently calling
hvm_msr_write_intercept().
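
As a sketch (not a patch), with the existing per-MSR handling elided:

/*
 * Sketch of the proposed change: "mem_event" decides whether the write
 * is merely reported (and deferred) or performed now.
 */
int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
                            bool_t mem_event)
{
    if ( mem_event /* && mem_access active && MSR write events requested */ )
    {
        /* Report the attempted write and stop; nothing is written yet. */
        hvm_memory_event_msr(msr, msr_content);
        return X86EMUL_OKAY;
    }

    /*
     * mem_event == 0: the existing per-MSR switch runs as it does today
     * (elided here) and performs the write, with no MSR mem_event sent.
     */
    return X86EMUL_OKAY;
}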

Then, somewhere (where?) similar in purpose to vmx_vmenter_helper(),
check the per-VCPU data and, if a write is pending, call
hvm_msr_write_intercept() with mem_event == false, which would perform
the actual write (with _no_ MSR mem_event sent).
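
For illustration, reusing the made-up per-VCPU structure from the sketch
above (the helper's name and, more importantly, its call site are
exactly what I'm unsure about):

/* Hypothetical helper, called on the VM-entry path of the affected VCPU. */
static void hvm_replay_deferred_msr_write(struct vcpu *v)
{
    struct hvm_msr_write_req *req = &v->arch.hvm_vcpu.msr_write_req;

    if ( !req->pending )
        return;

    req->pending = 0;

    /* mem_event == 0: do the write for real, sending no further event. */
    hvm_msr_write_intercept(req->msr, req->value, 0);
}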

Would this be likely to blow up anything? If not, could you recommend a
candidate function in which to place the MSR-setting call? It can't be
done in p2m_mem_access_resume(), since "current" is not the right VCPU
there.


Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

