
Re: [Xen-devel] PML (Page Modification Logging) design for Xen



>>> On 11.02.15 at 17:33, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 11/02/15 13:13, Jan Beulich wrote:
>>>>> On 11.02.15 at 12:52, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 11/02/15 08:28, Kai Huang wrote:
>>>> We handle the above two cases by flushing the PML buffer at the
>>>> beginning of all VMEXITs. This solves the first case above, and it
>>>> also solves the second case: prior to paging_log_dirty_op,
>>>> domain_pause is called, which kicks vcpus that are in guest mode out
>>>> of guest mode by sending them an IPI, which causes a VMEXIT.
>>>>
>>>> This also keeps the log-dirty radix tree more up to date, as the PML
>>>> buffer is flushed on all VMEXITs rather than only on the PML buffer
>>>> full VMEXIT.
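
For illustration, a minimal sketch of such a flush-on-VMEXIT handler;
the PML page mapping, the GUEST_PML_INDEX field name and the
paging_mark_gfn_dirty() helper below are assumed names for the sketch,
not necessarily what the series will end up using:

/* Sketch only: drain this vCPU's PML buffer into the log-dirty tree. */
#define NR_PML_ENTRIES 512

static void pml_flush(struct vcpu *v)
{
    uint64_t *pml = v->arch.hvm_vmx.pml_va;  /* assumed mapping of the PML page */
    unsigned long idx;

    vmx_vmcs_enter(v);

    /* Hardware starts at 511 and decrements the index per logged entry. */
    __vmread(GUEST_PML_INDEX, &idx);

    /*
     * The 16-bit index wraps when the buffer fills up, so treat any
     * out-of-range value as "all 512 entries are valid"; otherwise the
     * valid entries live in [idx + 1, 511].
     */
    idx = (idx >= NR_PML_ENTRIES) ? 0 : idx + 1;

    for ( ; idx < NR_PML_ENTRIES; idx++ )
    {
        unsigned long gfn = pml[idx] >> PAGE_SHIFT;

        /* Assumed helper: set the gfn's bit in the log-dirty radix tree. */
        paging_mark_gfn_dirty(v->domain, gfn);
    }

    /* Reset the index so the buffer is empty for the next round. */
    __vmwrite(GUEST_PML_INDEX, NR_PML_ENTRIES - 1);

    vmx_vmcs_exit(v);
}

Note that flushing here only reads the logged GPAs; it does not touch
the D bits themselves, which is the point made below.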
>>> My gut feeling is that this is substantial overhead on a common path,
>>> but this largely depends on how the dirty bits can be cleared efficiently.
>> I agree on the overhead part, but I don't see what relation this has
>> to the dirty bit clearing - a PML buffer flush doesn't involve any
>> alterations of D bits.
> 
> I admit that I was off by one level when considering the
> misconfiguration overhead.  It would be inefficient (but not unsafe as
> far as I can tell) to clear all D bits at once; the PML buffer could
> end up with repeated gfns in it, or different vcpus could end up
> logging the same gfn, depending on the exact access pattern, which
> adds to the flush overhead.

Why would that be? A misconfiguration exit means no access to
a given range was possible at all before, i.e. all subordinate pages
would have the D bit clear if they were reachable. What you
describe would - afaict - be a problem only if we didn't go over the
whole guest address space at once.
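
For illustration, a rough sketch of what going over the whole guest
address space in one go means in terms of the EPT D bits (bit 9 of a
leaf entry when the EPT A/D feature is enabled); map_ept_table() is an
assumed helper, and a real walk would also need an INVEPT afterwards to
flush cached translations:

/* EPT leaf entries carry the dirty flag in bit 9 when A/D bits are on. */
#define EPT_D_BIT      (1ULL << 9)
#define EPT_SUPERPAGE  (1ULL << 7)  /* large-page bit at non-terminal levels */
#define EPT_RWX_MASK   0x7ULL

/* Sketch only: recursively clear the D bit in every present leaf entry. */
static void ept_clear_d_bits(uint64_t *table, unsigned int level)
{
    unsigned int i;

    for ( i = 0; i < 512; i++ )
    {
        uint64_t e = table[i];

        if ( !(e & EPT_RWX_MASK) )                 /* entry not present */
            continue;

        if ( level == 1 || (e & EPT_SUPERPAGE) )   /* 4k or superpage leaf */
            table[i] = e & ~EPT_D_BIT;
        else
            /* Assumed helper: map the next-level table this entry points to. */
            ept_clear_d_bits(map_ept_table(e), level - 1);
    }
}

Starting the walk from the top-level table is what covers the whole
guest address space in one go.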

Jan

