Re: [Xen-devel] [PATCH V2] mem_event: Allow emulating an instruction that caused a page fault
At 15:47 +0200 on 22 Jan (1358869654), Razvan Cojocaru wrote:
> >Ok, talking only about writes, we have the destination operand, plus
> >all the pagetables (for setting Accessed bits) plus any stacks and
> >TSSes needed in delivering faults; something like 32 pages for the
> >full double-fault scenario.
>
> I see, but then, even setting aside Andres' argument that having all
> possible events sent to userspace is far from trivial, doing so would
> completely cripple the monitored domain speed-wise. Imagine userspace
> having to decide whether to allow a write 32 times per instruction.

Well, the _common_ case is only to have one write.  Even rarer cases
like needing an Accessed bit or crossing a page will only be two writes.

But sending all the events to userspace doesn't necessarily solve the
problem.  The guest could be overwriting the instruction stream from
another vcpu, or by DMA from disk, so that by the time you've returned
to Xen the instruction you emulate isn't even the one that faulted.
The only properly safe way to allow exactly one exception to your rules
is to emulate the instruction in user-space.  (Well, that or somehow
move your policy into Xen and do the emulation there, but I'm quite
strongly opposed to that).

> Even with its limitations, this patch at least gives us a shot at
> looking at what the domain does with memory - the existing model
> (releasing the page completely to allow the write) is not fit for
> anything like this at all.

If you're just using this to gather statistics about how often a page
gets written, you could use sampling; you don't need to see _every_
write.

> So far, between reasonable demands on the domU and satisfactory levels
> of control provided by the received mem_events, at least as far as
> writes go, the patch has done its job quite nicely.

It might be helpful if you could give us a clear description of exactly
what problem you're trying to solve.

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
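For context, a rough sketch of the listener-side choice being debated
in this thread: on a write fault the monitor fills in a response and,
rather than temporarily lifting the page's protection (the existing
model), asks Xen to emulate only the faulting instruction.  This is an
illustration, not the patch itself: the flag and struct names
(MEM_EVENT_FLAG_EMULATE, mem_event_request_t/mem_event_response_t) are
as they appear in later Xen trees and may be spelled differently in
this V2; policy_allows_write() is a hypothetical placeholder for the
introspection policy; and the event-ring plumbing and vcpu
pause/resume handling (as done in tools/tests/xen-access) are omitted.

/*
 * Minimal sketch of handling one write-fault mem_event in the monitor.
 * Assumes the emulate-response flag proposed by this patch series;
 * policy_allows_write() is a made-up placeholder, and the ring
 * setup/consumption is left out.  Header paths may differ by version.
 */
#include <string.h>
#include <xenctrl.h>
#include <xen/mem_event.h>

static int policy_allows_write(uint64_t gfn, uint64_t offset);  /* hypothetical */

static void handle_write_event(const mem_event_request_t *req,
                               mem_event_response_t *rsp)
{
    memset(rsp, 0, sizeof(*rsp));
    rsp->vcpu_id = req->vcpu_id;
    rsp->gfn     = req->gfn;

    if ( policy_allows_write(req->gfn, req->offset) )
        /*
         * Ask Xen to emulate the single faulting instruction in place,
         * instead of relaxing the p2m access on the whole page, which
         * would let unrelated writes through until it is restored.
         */
        rsp->flags |= MEM_EVENT_FLAG_EMULATE;
    /*
     * Otherwise leave the flag clear: the write stays blocked and the
     * monitor applies whatever other policy it has for denied writes.
     */
}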