[luau] getting system time in millisecs
Lockhart, Charles
Charles.Lockhart at SpirentCom.COM
Wed Sep 18 19:43:01 PDT 2002
> Do you really need to check it per iteration? I mean, in your case,
> could you move the timing outside
> the loop and divide the result by the number of iterations?
> If you do
> that, then you will get an average
> iteration which will minimize the time spent timing.
I need to be able to perform each cycle within a very small period of time
on average, and guarantee that the period of a single cycle doesn't exceed a
certain threshold.
> Not quite sure what you mean by "debug". You mean the timing code is
> skewing your results?
The timing code isn't necessary for the operation itself; it's only there to
characterize it. The extra overhead inflates the overall period.
> Here's an idea: say get_time() is your timing function (be it from
> gettimeofday() or clock() or whatever). Before you start your loop,
> do a little profiling of the timing function, something like
>
> before = get_time();
> for (i = 0; i < 10000; i++) {
>     get_time();
> }
> timing_cost = (get_time() - before) / 10000;
>
> Then you have a rough idea of what get_time() is going to take during
> your loop, so you can subtract it out from the results.
Jeez, I hadn't thought of that. Thanks, I'll try that tomorrow.
> On a side note, there is a syscall called times() which fills a
> structure containing the clock ticks elapsed since the start of the
> program. It has two fields of interest: one is the clock ticks spent
> in kernel space, the other is the clock ticks spent in user space.
> You might be able to use it to isolate the times from different
> parts of your program, too.
Cool, thanks Ray.
-Charles
More information about the LUAU mailing list