
Re: [Xen-devel] PML (Page Modification Logging) design for Xen

>>> On 12.02.15 at 03:49, <kai.huang@xxxxxxxxxxxxxxx> wrote:
> On 02/11/2015 09:06 PM, Jan Beulich wrote:
>>>>> On 11.02.15 at 09:28, <kai.huang@xxxxxxxxxxxxxxx> wrote:
>>> - PML buffer flush
>>> There are two places where we need to flush the PML buffer. The first is
>>> the PML-buffer-full VMEXIT handler (obviously), and the second is in
>>> paging_log_dirty_op (either peek or clean): since vCPUs run
>>> asynchronously while paging_log_dirty_op is called from userspace via
>>> hypercall, it's possible that dirty GPAs are logged in vCPUs' PML
>>> buffers without those buffers being full. Therefore we'd better flush
>>> all vCPUs' PML buffers before reporting dirty GPAs to userspace.
>>> We handle both cases by flushing the PML buffer at the beginning of all
>>> VMEXITs. This covers the first case, and it also covers the second:
>>> prior to paging_log_dirty_op, domain_pause is called, which kicks vCPUs
>>> (that are in guest mode) out of guest mode by sending them an IPI,
>>> which in turn causes a VMEXIT.
>>> This also keeps the log-dirty radix tree more up to date, as the PML
>>> buffer is flushed on every VMEXIT rather than only on the
>>> PML-buffer-full VMEXIT.
>> Is that really efficient? Flushing the buffer only as needed doesn't
>> seem to be a major problem (apart from the usual preemption issue
>> when dealing with guests with very many vCPU-s, but you certainly
>> recall that at this point HVM is still limited to 128).
>> Apart from these two remarks, the design looks okay to me.
> While keeping the log-dirty radix tree more up to date is probably 
> irrelevant, I do think we'd better flush PML buffers in 
> paging_log_dirty_op (both peek and clear) before reporting dirty pages 
> to userspace, in which case flushing the PML buffer at the beginning of 
> VMEXIT is a good idea, as domain_pause does the job automatically. I am 
> not sure how many cycles flushing the PML buffer will add, but I expect 
> it to be small compared to the VMEXIT itself, and therefore negligible.
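For reference, the flush being discussed amounts to walking the valid PML entries and marking the corresponding frames dirty. Below is a minimal, self-contained sketch of that idea, not the actual Xen code: the structure names, the 512-entry buffer with a downward-counting index (511 when empty, 0xFFFF once full, as on VMX hardware), and the flat dirty bitmap standing in for the log-dirty radix tree are all simplifications for illustration.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PML_ENTRIES 512
#define NR_PAGES    1024

/* Hypothetical per-vCPU PML state: a buffer of page-aligned guest-physical
 * addresses, plus an index the hardware decrements after logging each entry
 * (511 = empty; 0xFFFF after the buffer has filled up). */
struct vcpu_pml {
    uint64_t buffer[PML_ENTRIES];
    uint16_t index;
};

/* Stand-in for the log-dirty radix tree: one bit per guest frame. */
static uint8_t dirty_bitmap[NR_PAGES / 8];

static void mark_dirty(uint64_t gfn)
{
    dirty_bitmap[gfn / 8] |= 1u << (gfn % 8);
}

/* Flush all valid PML entries into the dirty bitmap and reset the index.
 * Valid entries occupy slots index+1 .. 511 for a partially filled buffer,
 * or 0 .. 511 when the buffer is full (index == 0xFFFF). */
static void pml_flush(struct vcpu_pml *v)
{
    unsigned int start = (v->index == 0xFFFF) ? 0 : v->index + 1;

    for (unsigned int i = start; i < PML_ENTRIES; i++)
        mark_dirty(v->buffer[i] >> 12);   /* entries hold page-aligned GPAs */

    v->index = PML_ENTRIES - 1;           /* buffer is empty again */
}
```

Calling this at the top of the common VMEXIT path (or for each vCPU after domain_pause) would be enough for both cases above; an empty buffer makes the loop a no-op, which is why the per-exit cost is expected to be small.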

As far as my general thinking goes, this is the wrong attitude:
_Anything_ added to a hot path like VMEXIT processing should be
considered performance relevant. I.e. if everyone took the same
position as you do, we'd easily get many "negligible" additions, all
of which would add up to something no longer negligible.

