Re: [Xen-devel] Blocking CR and MSR writes via mem_access?
On 10/03/2014 07:22 PM, Tamas K Lengyel wrote:
>
> On Fri, Oct 3, 2014 at 2:37 PM, Andrew Cooper <andrew.cooper3@xxxxxxxxxx
> <mailto:andrew.cooper3@xxxxxxxxxx>> wrote:
>
>     On 03/10/14 13:32, Tamas K Lengyel wrote:
>>
>>     On Thu, Oct 2, 2014 at 12:49 PM, Razvan Cojocaru
>>     <rcojocaru@xxxxxxxxxxxxxxx <mailto:rcojocaru@xxxxxxxxxxxxxxx>> wrote:
>>
>>         Hello,
>>
>>         Currently hvm_memory_event_cr3() and the other hvm_memory_event_*()
>>         functions in hvm.c can pause the VCPU and send a mem_event with the
>>         new value of the respective register, but especially in the case of
>>         CR events (as opposed to MSR events), this is done _after_ the value
>>         is set (please see hvm_set_cr3() in hvm.c).
>>
>>         It would be interesting from a memory introspection application's
>>         point of view to be able to receive a mem_event _before_ the value
>>         is set, and important to be able to veto the change.
>>
>>         A few questions:
>>
>>         1. Would it be acceptable to move the CR3 event sending code so
>>         that a mem_access client would receive the event _before_ the write
>>         takes place? Is this likely to break other mem_event clients that
>>         might rely on the event being received _after_ the value has been
>>         set?
>>
>>     Yes, it would break existing applications.
>>
>>         2. I see that mem_event responses from all these cases (EPT
>>         violations, CR, MSR) are handled in p2m.c's p2m_mem_access_resume()
>>         (seems to be confirmed by testing). Is this correct?
>>
>>         3. What would be the sanest, most elegant way to modify Xen so that
>>         after a mem_event reply is being received for one of these cases
>>         (CR, MSR), the write will then be rejected? I'm asking because, as
>>         always, ideally this would also benefit other Xen users and an
>>         elegant patch is always more likely to find its way into mainline
>>         than a quick hack.
>>
>>     You can already block such writes with the existing post-write event
>>     delivery. If you are continuously watching for writes, you know what
>>     the previous value was (for CR events it is actually delivered to you
>>     by Xen as well as per my recent patch). If you don't like a particular
>>     new value that was set, just reset it to the value you had / want.
>>
>>     Tamas
>
>     That doesn't work if you join an event listener between the previous
>     MSR write and one you wish to veto.
>
> Yes, that's correct. That's why I said it works if you continuously
> monitor for writes. I think that's a reasonable assumption. We could
> also make the MSR write events deliver the previous value as well
> similar to how the CR events do it. Anyway, AFAIU the hardware traps
> always happen before the write so technically both approaches are
> pre-write from the guest's perspective.

I've actually been looking at this for a bit, and while it's true that it
might work for CR events, it's less clear how that would work for MSRs.
The CR part might be done in the following fashion:

vcpu_guest_context_any_t ctx;

if (xc_vcpu_getcontext(xch, domain, req.vcpu_id, &ctx) == 0) {
    ctx.c.ctrlreg[crNumber] = req.gla; /* old value */
    xc_vcpu_setcontext(xch, domain, req.vcpu_id, &ctx);
}

However, only a very small number of MSRs are being offered for
manipulation from dom0 via libxc (xen/include/public/arch-x86/hvm/save.h):

struct hvm_hw_cpu {
    /* ... */

    /* msr content saved/restored. */
    uint64_t msr_flags;
    uint64_t msr_lstar;
    uint64_t msr_star;
    uint64_t msr_cstar;
    uint64_t msr_syscall_mask;
    uint64_t msr_efer;
    uint64_t msr_tsc_aux;

    /* ... */
};

So this would either require expanding the vcpu_guest_context_any_t
information, or adding a new hypercall that sets MSRs (which would wrap /
ultimately end up calling msr_write_intercept(msr, value)).

Am I missing something obvious here, or were you explicitly only talking
about CRs?
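For reference, here is a rough, untested sketch of how one of the few MSRs
that _are_ present in the HVM save record (msr_lstar, for instance) could be
put back to a previous value from the listener, by rewriting the whole HVM
context through libxc rather than through vcpu_guest_context_any_t. The
helper name and the assumption that the old value is already known from an
earlier event are made up for illustration, and it would need to be verified
that a full xc_domain_hvm_setcontext() is actually accepted on a paused
domain outside the save/restore path:

/* Rough, untested sketch: revert one of the few MSRs present in the HVM
 * save record (msr_lstar here) by rewriting the whole HVM context.
 * Assumes the domain is paused and that a full sethvmcontext is accepted
 * outside the save/restore path -- both worth double-checking. */
#include <stdlib.h>
#include <xenctrl.h>
#include <xen/hvm/save.h>

static int revert_msr_lstar(xc_interface *xch, uint32_t domid,
                            uint16_t vcpu, uint64_t old_lstar)
{
    uint32_t off, size;
    uint8_t *buf;
    int rc;

    /* A first call with a NULL buffer just reports the required size. */
    rc = xc_domain_hvm_getcontext(xch, domid, NULL, 0);
    if ( rc <= 0 )
        return -1;
    size = rc;

    buf = malloc(size);
    if ( !buf )
        return -1;

    rc = -1;
    if ( xc_domain_hvm_getcontext(xch, domid, buf, size) < 0 )
        goto out;

    /* Walk the save records looking for this VCPU's CPU record. */
    for ( off = 0; off < size; )
    {
        struct hvm_save_descriptor *desc =
            (struct hvm_save_descriptor *)(buf + off);

        if ( desc->typecode == HVM_SAVE_CODE(CPU) &&
             desc->instance == vcpu )
        {
            struct hvm_hw_cpu *cpu =
                (struct hvm_hw_cpu *)(buf + off + sizeof(*desc));

            cpu->msr_lstar = old_lstar; /* put the previous value back */
            break;
        }

        off += sizeof(*desc) + desc->length;
    }

    /* A real implementation should bail out if no matching record was
     * found instead of writing the context back unchanged. */
    rc = xc_domain_hvm_setcontext(xch, domid, buf, size);

 out:
    free(buf);
    return rc;
}

Even if that works, it only covers the handful of MSRs in hvm_hw_cpu, so it
doesn't change the point above that the general case would need either an
expanded context structure or a new hypercall.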
Thanks,
Razvan Cojocaru

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel