
Re: [Xen-devel] PML (Page Modification Logging) design for Xen



> From: Kai Huang [mailto:kai.huang@xxxxxxxxxxxxxxx]
> Sent: Thursday, February 12, 2015 10:50 AM
> 
> >> - PML buffer flush
> >>
> >> There are two places where we need to flush the PML buffer. The first is
> >> the PML buffer full VMEXIT handler (obviously), and the second is in
> >> paging_log_dirty_op (either peek or clean): vcpus keep running
> >> asynchronously while paging_log_dirty_op is called from userspace via
> >> hypercall, so dirty GPAs may be logged in vcpus' PML buffers without the
> >> buffers being full. Therefore we should flush all vcpus' PML buffers
> >> before reporting dirty GPAs to userspace.
> >>
> >> We handle both cases above by flushing the PML buffer at the beginning of
> >> all VMEXITs. This solves the first case, and it also solves the second
> >> case: before paging_log_dirty_op runs, domain_pause is called, which kicks
> >> vcpus that are in guest mode out of guest mode by sending them an IPI,
> >> which causes a VMEXIT.
> >>
> >> This also keeps the log-dirty radix tree more up to date, as the PML
> >> buffer is flushed on every VMEXIT rather than only on the PML buffer full
> >> VMEXIT.
> > Is that really efficient? Flushing the buffer only as needed doesn't
> > seem to be a major problem (apart from the usual preemption issue
> > when dealing with guests with very many vCPU-s, but you certainly
> > recall that at this point HVM is still limited to 128).
> >
> > Apart from these two remarks, the design looks okay to me.
> While keeping the log-dirty radix tree more up to date is probably
> irrelevant, I do think we should flush PML buffers in paging_log_dirty_op
> (both peek and clear) before reporting dirty pages to userspace, in which
> case flushing the PML buffer at the beginning of VMEXIT is a good idea, as
> domain_pause does the job automatically. I am not sure how many cycles
> flushing the PML buffer will add, but it should be relatively small
> compared to the VMEXIT itself, so it can be ignored.

It's not intuitive to add overhead (one extra vmread) to every VMEXIT
just to exploit the side effect of one specific exit caused by domain_pause.
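
For concreteness, below is a rough sketch of the kind of per-exit flush being
discussed (all PML-specific names -- GUEST_PML_INDEX, NR_PML_ENTRIES, pml_pg,
paging_mark_gfn_dirty -- are assumptions for illustration, not existing code):

    #include <xen/sched.h>
    #include <asm/hvm/vmx/vmx.h>
    #include <asm/hvm/vmx/vmcs.h>

    /* Sketch only: drain this vcpu's PML buffer into the log-dirty state.
     * Would be called at the top of vmx_vmexit_handler(). */
    static void vmx_vcpu_flush_pml_buffer(struct vcpu *v)
    {
        unsigned long pml_idx;
        uint64_t *pml_buf;

        vmx_vmcs_enter(v);

        /* This is the extra vmread per VMEXIT under discussion. */
        __vmread(GUEST_PML_INDEX, &pml_idx);

        /* Hardware fills entries downwards; an index of NR_PML_ENTRIES - 1
         * (511) means the buffer is still empty. */
        if ( pml_idx != NR_PML_ENTRIES - 1 )
        {
            /* On a PML-full exit the index has wrapped to 0xffff. */
            pml_idx = (pml_idx >= NR_PML_ENTRIES) ? 0 : pml_idx + 1;

            pml_buf = __map_domain_page(v->arch.hvm_vmx.pml_pg);

            for ( ; pml_idx < NR_PML_ENTRIES; pml_idx++ )
                /* Each valid entry holds the GPA of a page the guest wrote. */
                paging_mark_gfn_dirty(v->domain,
                                      pml_buf[pml_idx] >> PAGE_SHIFT);

            unmap_domain_page(pml_buf);

            /* Make the whole buffer available to hardware again. */
            __vmwrite(GUEST_PML_INDEX, NR_PML_ENTRIES - 1);
        }

        vmx_vmcs_exit(v);
    }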

> 
> An optimization would be to flush the PML buffer only on the external
> interrupt VMEXIT, which is what domain_pause actually triggers, rather than
> at the beginning of all VMEXITs. But as long as the overhead of flushing
> the PML buffer is negligible, this optimization is also unnecessary.
> 

This optimization is not a real optimization, as you are still relying on an
implementation detail of other operations. If you really want to make use of
domain_pause, piggybacking the PML flush explicitly in that path would make
things clearer.
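
For example, something along these lines (again only a sketch with a
simplified signature; in real code the VMX flush would sit behind a
paging/HAP hook rather than being called directly from common code):

    #include <xen/sched.h>

    /* Sketch only: flush PML buffers explicitly in the log-dirty path once
     * the domain is paused, instead of relying on the side effect of the
     * pause IPI forcing a VMEXIT on every vcpu. */
    static int paging_log_dirty_op(struct domain *d, bool clean)
    {
        struct vcpu *v;

        domain_pause(d);

        /* All vcpus are out of guest mode now, so their PML buffers are
         * stable; drain them so the log-dirty radix tree is complete before
         * the bitmap is reported to (or cleared for) userspace. */
        for_each_vcpu ( d, v )
            vmx_vcpu_flush_pml_buffer(v);  /* illustrative helper from above */

        /* ... existing peek/clean processing of the log-dirty bitmap ... */

        domain_unpause(d);

        return 0;
    }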

Thanks
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

