
Re: [Xen-devel] about the function call memory_type_changed()



> >> >> flush_all() consumes about 8 milliseconds in my test environment.
> >> >> The VM has 4 VCPUs, so hvm_load_mtrr_msr() is called four times,
> >> >> consuming about 500 milliseconds in total. Obviously, there are too
> >> >> many flush_all() calls.
> >> >>
> >> >> I think something should be done to solve this issue, do you think so?
> >> >
> >> > The flush_all() can't be avoided completely, as it is permitted to
> >> > use sethvmcontext on an already-running VM.  In this case, the
> >> > flush certainly does need to happen if altering the MTRRs has had a
> >> > real effect on dirty cache lines.
> >
> > Yes, that's true. But I still don't understand why the flush_all() is
> > done only when iommu_enabled is true. Could you explain why?
> 
> The question you raise doesn't reflect what the function does: it doesn't
> flush when iommu_enabled; it calls p2m_memory_type_changed() in that
> case. And that operation is what requires a flush afterwards (unless we
> know that nothing can be cached yet), as there may have been a
> cacheable -> non-cacheable transition for some of the pages assigned to
> the guest.
> 
> The fact that the call is made dependent on iommu_enabled is simply
> because when that flag is not set, all memory of the guest is treated as
> WB (since no physical device can be assigned to the guest in that case),
> and hence no type changes can occur.
> Possibly one could instead use need_iommu(d), but I didn't properly think
> through the eventual consequences of doing so.

Jan, thanks for your explanation.
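
If I follow correctly, the logic is roughly the following (just a sketch
based on your description, not the exact code in xen/arch/x86/hvm/mtrr.c):

/* Sketch of memory_type_changed() as described above (not verbatim). */
void memory_type_changed(struct domain *d)
{
    if ( iommu_enabled )
    {
        /* Effective memory types of guest pages may have changed
         * (e.g. a cacheable -> non-cacheable transition) ... */
        p2m_memory_type_changed(d);

        /* ... so dirty cache lines must be written back on all CPUs. */
        flush_all(FLUSH_CACHE);
    }
    /* Without an IOMMU no device can be assigned, all guest memory is
     * treated as WB, and no type change can occur, so nothing to do. */
}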

> >> Plus the actual functions calling memory_type_changed() in mtrr.c can
> >> also be called while the VM is already running.
> >>
> >> > However, having a batching mechanism across hvm_load_mtrr_msr()
> >> > with a single flush at the end seems like a wise move.
> >>
> >> And that shouldn't be very difficult to achieve. Furthermore, perhaps
> >> it would be possible to check whether the VM did run at all already,
> >> and if it didn't we could avoid the flush altogether in the context
> >> load case?
> >>
> >
> > I have written a patch according to your suggestions. But there are
> > still a lot of flush_all() calls when the guest boots, and this
> > prolongs the guest boot time by about 600 ms.
> 
> Without you telling us where those remaining ones come from, I don't think
> we can easily help you.

When the guest boots, it initializes the MTRRs, and those MSR writes are
intercepted by Xen. Because the guest performs dozens of MTRR MSR writes,
mtrr_fix_range_msr_set(), mtrr_def_type_msr_set() and
mtrr_var_range_msr_set() are called many times, and all of them eventually
call flush_all(), just like hvm_load_mtrr_msr(). It does not seem easy to
use a batching mechanism in this case.
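
To illustrate the pattern (simplified, not verbatim; the fixed- and
variable-range setters end the same way):

/* Simplified sketch of mtrr_def_type_msr_set(). */
bool_t mtrr_def_type_msr_set(struct domain *d, struct mtrr_state *m,
                             uint64_t msr_content)
{
    /* ... validate msr_content ... */
    m->def_type = msr_content & 0xff;
    m->enabled  = (msr_content >> 10) & 0x3;

    memory_type_changed(d);   /* -> flush_all(FLUSH_CACHE) each time */

    return 1;
}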

Each flush_all() consumes 5~8 ms, which is a big problem if a malicious
guest changes the MTRRs frequently.
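
For reference, the kind of batching suggested for the context-load path
could look roughly like the sketch below. The *_noflush() setters, the
vm_has_ever_run() check and the function name are illustrative only, not
existing code:

/* Illustrative sketch only -- not actual Xen code. */
static int hvm_load_mtrr_msr_batched(struct domain *d, struct vcpu *v,
                                     const struct hvm_hw_mtrr *hw_mtrr)
{
    unsigned int i;

    /* Restore the MTRR state without triggering a flush per MSR ... */
    mtrr_def_type_msr_set_noflush(d, &v->arch.hvm_vcpu.mtrr,
                                  hw_mtrr->msr_mtrr_def_type);
    for ( i = 0; i < NUM_FIXED_MSR; i++ )
        mtrr_fix_range_msr_set_noflush(d, &v->arch.hvm_vcpu.mtrr, i,
                                       hw_mtrr->msr_mtrr_fixed[i]);
    /* ... likewise for the variable ranges ... */

    /* ... then do the p2m recalculation plus cache flush only once,
     * and only if the guest may already have dirtied cache lines. */
    if ( vm_has_ever_run(d) )       /* hypothetical check */
        memory_type_changed(d);

    return 0;
}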
