Re: [Xen-devel] Mem_event API and MEM_EVENT_REASON_SINGLESTEP
At 18:20 +0200 on 29 Nov (1354213227), Razvan Cojocaru wrote:
> Hello, thank you for the quick reply!
>
> >That might work for single-vcpu guests. On multi-vcpu, you'd have to
> >pause the whole VM, unprotect the page, single-step the one vcpu that
> >trapped, re-protect the page, and unpause the VM. That might be
> >unacceptably slow.
>
> Ah, the simple fact that the words "unacceptably slow" could be employed
> to describe the situation makes this approach very unlikely to fit my
> purposes.
>
> I'm still interested in how the debugger API works, though. Maybe, just
> maybe, it'll just be acceptably slow. :)

:) I think it will depend entirely on how much memory you protect and
what the guest is doing with it.

It should be possible to make the multi-vcpu case of single-stepping
work better by having per-cpu EPT tables, if you're up for hacking away
at Xen. That way you wouldn't have to pause the other vcpus while you
made the memory writeable.

> >You could try:
> > - pause the domain
> > - copy out the contents of the page
> > - use XENMEM_decrease_reservation to remove the page from the guest
> > - unpause the domain
> >
> >Then all accesses to that page will get emulated by Xen and forwarded to
> >qemu, just like happens for emulated MMIO devices. In qemu, you can
> >emulate the read or write access, and do anything else you like at the
> >same time.
> >
> >That won't work for memory that's accessed in non-trivial ways
> >(e.g. used for pagetables or descriptor tables) or using instructions
> >that are unsupported/buggy in Xen's instruction emulator.
>
> Well, I need it to be able to work on _all_ memory, however accessed.

Ah, in that case this won't work for you. I think the single-step may
be your only option.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
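
For reference, the lift-protection / single-step / re-protect sequence
discussed above could be driven from the control domain roughly as
sketched below, using the libxc calls of that era (circa Xen 4.2). This
is a sketch only: the handler names on_mem_access_event() and
on_singlestep_event() are invented for illustration, the mem_event ring
setup and the response/notify steps are omitted, and on a multi-vcpu
guest you would additionally have to keep the other vcpus from running
(e.g. xc_domain_pause()/xc_domain_unpause() around the window), which is
exactly the cost Tim warns about.

/* Sketch only: lift the write-protection, single-step the vcpu that
 * trapped, then restore the protection.  Handler names are hypothetical
 * and the mem_event ring setup/response/notify code is not shown. */
#include <xenctrl.h>

/* Called when the mem_event ring reports a write to a protected gfn. */
static int on_mem_access_event(xc_interface *xch, domid_t domid,
                               uint64_t gfn, uint32_t vcpu_id)
{
    /* Make the page fully accessible so the faulting instruction can
     * complete.  On a multi-vcpu guest this is the window where other
     * vcpus could write unnoticed unless the rest of the VM is held
     * off (e.g. with xc_domain_pause()). */
    if ( xc_hvm_set_mem_access(xch, domid, HVMMEM_access_rwx, gfn, 1) )
        return -1;

    /* Ask Xen to single-step the trapping vcpu; the completed step is
     * reported back as a MEM_EVENT_REASON_SINGLESTEP event. */
    if ( xc_domain_debug_control(xch, domid,
                                 XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON,
                                 vcpu_id) )
        return -1;

    /* Then put a response on the mem_event ring and notify the event
     * channel so the vcpu resumes (omitted here). */
    return 0;
}

/* Called when the MEM_EVENT_REASON_SINGLESTEP event arrives. */
static int on_singlestep_event(xc_interface *xch, domid_t domid,
                               uint64_t gfn, uint32_t vcpu_id)
{
    /* Re-arm the protection (write-protection shown here; use whatever
     * access type was set originally)... */
    if ( xc_hvm_set_mem_access(xch, domid, HVMMEM_access_rx, gfn, 1) )
        return -1;

    /* ...and stop single-stepping this vcpu before resuming it. */
    return xc_domain_debug_control(xch, domid,
                                   XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF,
                                   vcpu_id);
}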
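
The second suggestion (pause, copy the page out, remove it with
XENMEM_decrease_reservation, unpause) could look like the following from
the tools side. Again only a sketch, assuming an HVM guest and the same
libxc interface; remove_and_copy_page() is a hypothetical helper, and
the qemu side that actually emulates the subsequent accesses is not
shown.

/* Sketch: copy a guest page's contents, then remove the frame from the
 * guest so later accesses to that gfn are emulated by Xen and forwarded
 * to qemu.  remove_and_copy_page() is a hypothetical helper name. */
#include <string.h>
#include <sys/mman.h>
#include <xenctrl.h>

static int remove_and_copy_page(xc_interface *xch, domid_t domid,
                                xen_pfn_t gfn,
                                void *copy /* XC_PAGE_SIZE bytes */)
{
    void *page;
    int rc;

    /* Stop the guest so the copy and the removal happen atomically
     * with respect to guest writes. */
    rc = xc_domain_pause(xch, domid);
    if ( rc )
        return rc;

    /* Copy out the current contents of the page. */
    page = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE, PROT_READ, gfn);
    if ( !page )
    {
        rc = -1;
        goto out;
    }
    memcpy(copy, page, XC_PAGE_SIZE);
    munmap(page, XC_PAGE_SIZE);

    /* Remove the frame from the guest's physmap; accesses to the gfn
     * are now handled like emulated MMIO. */
    rc = xc_domain_decrease_reservation_exact(xch, domid, 1, 0, &gfn);

 out:
    xc_domain_unpause(xch, domid);
    return rc;
}

As noted in the thread, this only helps for pages accessed in ordinary
ways: memory used for pagetables or descriptor tables, or touched by
instructions Xen's emulator doesn't handle, won't work this way.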