Re: [Xen-devel] Question about high CPU load during iperf ethernet testing
On Wed, 24 Sep 2014, Iurii Konovalenko wrote:
> Hi, Ian!
>
> On Tue, Sep 23, 2014 at 7:48 PM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> > On Mon, 2014-09-22 at 16:01 +0300, Iurii Konovalenko wrote:
> >
> >> I decided to debug a bit, so I used "({register uint64_t _r; asm
> >> volatile("mrrc " "p15, 0, %0, %H0, c14" ";" : "=r" (_r)); _r; })"
> >> to read the timer counter before and after the operations I want to
> >> test.
> >
> > I think that is CNTPCT aka the physical timer. This is trapped under
> > Xen. If you want an untrapped source of time you should use CNTVCT
> > which is p15,0,c14.
> >
> > I expect the results are unreliable due to this.
>
> Thanks a lot for the advice.
> The ARM docs say CNTPCT is p15,0,c14 and CNTVCT is p15,1,c14. I have
> now tried both, but the results are almost identical.
>
> > Also watch out that the granularity of those two timers is usually far
> > below the CPU clock frequency, so one tick of them can potentially
> > represent quite a few clock cycles/instructions IIRC.
>
> I do not need a very accurate value, just to understand where the time
> is spent. This part of the code is called many, many times, and I
> accumulate all the individual times, so the approximation seems good
> enough for me.
>
> I ran some experiments, such as:
> - measuring the time of the whole ethernet poll function
> - breaking this function into several parts, measuring the time of
>   those parts, and then summing them.
> The values were almost the same, so I concluded that in this way I can
> tell which part spends more time and which spends less.
>
> Please, could you advise a method to accurately measure time in Xen?

In the Xen hypervisor? You can simply call get_s_time().

In the guest kernel, using CNTVCT is the best option, as it is not
trapped.
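For reference, a minimal guest-kernel-side sketch of the CNTVCT approach,
assuming an ARMv7/AArch32 guest built with GCC. The helper names
read_cntvct() and read_cntfrq() are illustrative, not existing Linux or
Xen APIs; the read mirrors the construct quoted above, but with the
opc1=1 encoding so that the untrapped virtual counter is sampled:

    #include <stdint.h>

    /* CNTVCT (mrrc p15, 1, ..., c14): the virtual counter, not trapped
     * by Xen. Intended for kernel-side code. */
    static inline uint64_t read_cntvct(void)
    {
        uint64_t r;

        /* isb ensures earlier instructions have completed before the
         * counter is sampled. */
        asm volatile("isb; mrrc p15, 1, %0, %H0, c14" : "=r" (r));
        return r;
    }

    /* CNTFRQ: counter frequency in Hz, as programmed by firmware. */
    static inline uint32_t read_cntfrq(void)
    {
        uint32_t f;

        asm volatile("mrc p15, 0, %0, c14, c0, 0" : "=r" (f));
        return f;
    }

    /*
     * Usage sketch: accumulate raw ticks around the region of interest
     * and convert to nanoseconds once at the end (names hypothetical).
     *
     *   uint64_t t0 = read_cntvct();
     *   ethernet_poll();
     *   total_ticks += read_cntvct() - t0;
     *   total_ns = total_ticks * 1000000000ULL / read_cntfrq();
     */

Inside the hypervisor itself, a corresponding sketch using get_s_time(),
which returns nanoseconds since boot; the wrapper and accumulator names
are illustrative only:

    #include <xen/time.h>

    static s_time_t poll_ns;            /* accumulated time in ns */

    static void timed_region(void)      /* hypothetical wrapper */
    {
        s_time_t t0 = get_s_time();

        /* ... code under test ... */

        poll_ns += get_s_time() - t0;
    }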