
Re: [Xen-devel] about the function call memory_type_changed()



>>> On 21.01.15 at 11:15, <liang.z.li@xxxxxxxxx> wrote:
>> >> I found that the restore process of live migration is quite long, so I
>> >> tried to find out what's going on.
>> >> By debugging, I found that the most time-consuming part is restoring the
>> >> VM's MTRR MSRs. This is done in hvm_load_mtrr_msr(), which calls
>> >> memory_type_changed(), which eventually calls the expensive function
>> >> flush_all().
>> >>
>> >> All this comes from the memory_type_changed() call added in your patch,
>> >> here is the link:
>> >> http://lists.xen.org/archives/html/xen-devel/2014-03/msg03792.html
>> >>
>> >> I am not sure whether the flush_all() is necessary at all, but even if it
>> >> is, a single call of hvm_load_mtrr_msr() causes dozens of flush_all()
>> >> calls, and each flush_all() call takes about 8 milliseconds. In my test
>> >> environment the VM has 4 VCPUs, so hvm_load_mtrr_msr() is called four
>> >> times and in total consumes about 500 milliseconds. Obviously there are
>> >> too many flush_all() calls.
>> >>
>> >> I think something should be done to solve this issue, do you think so?
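Just to put the numbers together: 500 ms / 4 VCPUs / 8 ms per flush works out
to roughly 16 flush_all() calls per hvm_load_mtrr_msr() invocation, which
presumably corresponds to about one flush per MTRR MSR restored.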
>> >
>> > The flush_all() can't be avoided completely, as it is permitted to use
>> > sethvmcontext on an already-running VM.  In this case, the flush
>> > certainly does need to happen if altering the MTRRs has had a real
>> > effect on dirty cache lines.
> 
> Yes, that's true. But I still don't understand why the flush_all is done
> only when iommu_enabled is true. Could you explain why?

The question you raise doesn't reflect what the function does: it
doesn't flush when iommu_enabled, it calls
p2m_memory_type_changed() in that case. And that operation is
what requires a flush afterwards (unless we know that nothing can
be cached yet), as there may have been a cacheable -> non-
cacheable transition for some of the pages assigned to the guest.

The fact that the call is made dependent on iommu_enabled is
simply because when that flag is not set, all memory of the
guest is treated as WB (as no physical device can be assigned to
the guest in that case), and hence no type changes can occur.
Possibly one could instead use need_iommu(d), but I didn't
properly think through the possible consequences of doing so.
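
Put differently, the structure being described is roughly the following (a
sketch based on the explanation above, not a verbatim copy of
xen/arch/x86/hvm/mtrr.c, whose exact guards may differ):

void memory_type_changed(struct domain *d)
{
    if ( iommu_enabled )
    {
        /* Recalculate the effective memory types in the p2m. */
        p2m_memory_type_changed(d);
        /*
         * Some of the guest's pages may have gone from cacheable to
         * non-cacheable, so write back and invalidate caches on all CPUs.
         */
        flush_all(FLUSH_CACHE);
    }
    /*
     * Without an IOMMU all guest memory is treated as WB, so no type
     * change can have occurred and no flush is needed.
     */
}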

>> Plus the actual functions calling memory_type_changed() in mtrr.c can
>> also be called while the VM is already running.
>> 
>> > However, having a batching mechanism across hvm_load_mtrr_msr() with a
>> > single flush at the end seems like a wise move.
>> 
>> And that shouldn't be very difficult to achieve. Furthermore perhaps it
>> would be possible to check whether the VM did run at all already, and if
>> it didn't we could avoid the flush altogether in the context load case?
>> 
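For illustration, the batching suggested above could take roughly the
following shape; the helpers restore_fixed_mtrrs(), restore_var_mtrrs() and
domain_has_run() are made-up names for this sketch, not existing Xen
interfaces:

static int hvm_load_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
{
    bool need_flush = false;

    /*
     * Restore all MTRR MSRs of the vCPU, only recording whether any
     * effective memory type may have changed, instead of letting each
     * individual MSR write trigger memory_type_changed()/flush_all().
     */
    need_flush |= restore_fixed_mtrrs(d, h);   /* hypothetical helper */
    need_flush |= restore_var_mtrrs(d, h);     /* hypothetical helper */

    /*
     * Single flush at the end of the context load, and only if the
     * guest has run before: until the first instruction is executed,
     * nothing of the guest's can be sitting in the caches.
     */
    if ( need_flush && domain_has_run(d) )     /* hypothetical check */
        memory_type_changed(d);

    return 0;
}

The point is simply that a whole context load then pays for at most one
flush_all(), and none at all if the guest has never executed.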
> 
> I have written a patch according to your suggestions. But there are still a
> lot of flush_all calls when the guest is booting, and this prolongs the
> guest boot time by about 600ms.

Without you telling us where those remaining ones come from, I don't
think we can easily help you.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

