
Re: [Xen-devel] PML (Page Modification Logging) design for Xen

On Mon, Feb 16, 2015 at 7:44 PM, Andrew Cooper
<andrew.cooper3@xxxxxxxxxx> wrote:
> On 14/02/15 03:01, Kai Huang wrote:
>>>> This will only function correctly if superpage shattering is used.
>>>> As soon as a superpage D bit transitions from 0 to 1, the gfn is logged
>>>> and the guest can make further updates in the same frame without further
>>>> log entries being recorded. The PML flush code *must* assume that every
>>>> other gfn mapped by the superpage is dirty, or memory corruption could
>>>> occur when resuming on the far side of the migration.
>>> In my understanding, the superpage has already been split before its D
>>> bit changes from 0 to 1: the EPT violation happens before the D-bit is
>>> set, so it's not possible to log a gfn before the superpage is split.
>>> Therefore PML doesn't need to assume every other gfn in the superpage
>>> range is dirty, as they are already 4K pages with the D-bit clear and
>>> can be logged by PML. Does this sound reasonable?
> Agreed - I was describing the non-shattering case.
>>>>>> It is also not conducive to minimising the data transmitted in the
>>>>>> migration stream.
>>>>> Yes, PML itself is unlikely to minimise the data transmitted in the
>>>>> migration stream, as how many dirty pages are transmitted is entirely
>>>>> up to the guest. But it avoids the EPT violations caused by 4K page
>>>>> write protection, so theoretically PML reduces CPU cycles spent in
>>>>> hypervisor context, leaving more cycles for guest mode; therefore it's
>>>>> reasonable to expect the guest to have better performance.
>>>> "performance" is a huge amorphous blob of niceness that wants to be
>>>> achieved.  You must be more specific than that when describing
>>>> "performance" as "better".
>>> Yes, I will gather some benchmark results before sending out the
>>> patches for review. It would be helpful if you or others could suggest
>>> how to measure the performance, such as which benchmarks should be run.
> At a start, a simple count of vmexits using xentrace would be
> interesting to see.

Will do.
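For reference, a typical vmexit-counting run might look like the commands below. The trace-class mask and flags are assumptions based on common xentrace usage; check xentrace and xenalyze on your Xen build before relying on them.

```shell
# Capture ~20 seconds of HVM-class trace records on the host
# (0x0008f000 is assumed here to select the HVM trace classes):
xentrace -D -e 0x0008f000 -T 20 /tmp/pml-trace.bin

# Summarise per-vCPU vmexit counts and reasons from the capture:
xenalyze --summary /tmp/pml-trace.bin
```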

> Can I highly recommend testing live migration using a memtest vm?  It
> was highly useful to me when developing migration v2 and complains very
> loudly if some of its memory gets left behind.

Sure. Thanks for the suggestion.

>>>>> Why would PML interact with HAP vram tracking poorly?
>>>> I was referring to the shattering aspect, rather than PML itself.
>>>> Shattering all superpages would be overkill to just track vram, which
>>>> only needs to cover a small region.
>> It looks to me that tracking vram under HAP currently shatters all
>> superpages, instead of only the superpages in the vram range. Am I
>> misunderstanding here?
> You are completely correct.
> Having just re-reviewed the HAP code, superpages are fully shattered as
> soon as logdirty mode is touched, which realistically means
> unconditionally, given that Qemu will always track guest VRAM.  (So much
> for the toolstack trying to optimise the guest by building memory using
> superpages; Qemu goes and causes Xen extra work by shattering them all.)
> This means that PML needing superpage shattering is no different to the
> existing code, which means that there are no extra overheads incurred as
> a direct result of PML.


> ~Andrew

