What is the true getrusage resolution?

I’m trying to measure the resolution of getrusage with a simple program:

#include <cstdio>
#include <sys/time.h>
#include <sys/resource.h>
#include <cassert>

int main() {
    // Baseline reading of this process's resource usage.
    // (Note: the getrusage calls live inside assert(), so this must be
    // compiled without -DNDEBUG.)
    struct rusage u = {};
    assert(!getrusage(RUSAGE_SELF, &u));
    size_t cnt = 0;
    // Spin until the reported user CPU time changes, then print both
    // readings and the number of iterations it took.
    while (true) {
        ++cnt;
        struct rusage uz = {};
        assert(!getrusage(RUSAGE_SELF, &uz));
        if (u.ru_utime.tv_sec != uz.ru_utime.tv_sec || u.ru_utime.tv_usec != uz.ru_utime.tv_usec) {
            std::printf("u:%ld.%06ld\tuz:%ld.%06ld\tcnt:%zu\n",
                    u.ru_utime.tv_sec, u.ru_utime.tv_usec,
                    uz.ru_utime.tv_sec, uz.ru_utime.tv_usec,
                    cnt);
            break;
        }
    }
}

And when I run it, I usually get output similar to the following:

ema@scv:~/tmp/getrusage$ ./gt
u:0.000562  uz:0.000563 cnt:1
ema@scv:~/tmp/getrusage$ ./gt
u:0.000553  uz:0.000554 cnt:1
ema@scv:~/tmp/getrusage$ ./gt
u:0.000496  uz:0.000497 cnt:1
ema@scv:~/tmp/getrusage$ ./gt
u:0.000475  uz:0.000476 cnt:1

This seems to hint that the resolution of getrusage is around 1 microsecond. I thought it would be around 1 / `getconf CLK_TCK` (i.e. 100 Hz, hence 10 milliseconds).
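
For reference, the tick figure I’m comparing against can also be queried programmatically; a minimal sketch using sysconf, which reports the same USER_HZ value that `getconf CLK_TCK` prints:

#include <cstdio>
#include <unistd.h>

int main() {
    // Same USER_HZ value that `getconf CLK_TCK` prints (typically 100).
    long hz = sysconf(_SC_CLK_TCK);
    std::printf("CLK_TCK: %ld (tick = %.1f ms)\n", hz, 1000.0 / hz);
}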

What is the true getrusage resolution?
Am I doing anything wrong?

P.S. I’m running this on Ubuntu 20.04, Linux scv 5.13.0-52-generic #59~20.04.1-Ubuntu SMP Thu Jun 16 21:21:28 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux, on a Ryzen 9 5950X.

Answer

The publicly defined tick interval is nothing more than a common reference point for the default time-slice each process gets to run. When its tick expires, the process loses its assigned CPU, which then begins executing some other task that is given a tick-long time-slice of its own.

But that does not guarantee that a given process will run for its full tick. If a process attempts to read() an empty socket and has nothing left to do in the middle of a tick, the kernel is not going to leave the process’s CPU idle; it will find something better for it to do instead. The kernel knows exactly how long the process actually ran, and there is no reason whatsoever why that actual running time cannot be recorded in the process’s usage statistics, especially when the clock used to measure process execution time offers much finer granularity than the tick interval.
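
That finer-grained clock is easy to observe directly. A minimal sketch, querying the resolution of the per-process CPU-time clock that the kernel uses for this accounting:

#include <cstdio>
#include <ctime>

int main() {
    // Resolution of the clock the kernel keeps for this process's CPU
    // time; on modern Linux this is typically 1 nanosecond.
    struct timespec ts = {};
    if (clock_getres(CLOCK_PROCESS_CPUTIME_ID, &ts) == 0)
        std::printf("CPU-time clock resolution: %ld.%09ld s\n",
                (long)ts.tv_sec, ts.tv_nsec);
}

On a kernel with high-resolution timers this prints a 1 ns resolution; getrusage merely truncates that bookkeeping to the microseconds of struct timeval, which matches the 1 µs steps observed above.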

Finally, a modern Linux kernel can be configured to dispense with tick intervals altogether in specific situations (the “tickless” CONFIG_NO_HZ options), so its advertised tick interval is mostly academic.
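
Whether your kernel was built that way can be checked from its installed config; a sketch, assuming the distribution ships it at /boot/config-<release> (Ubuntu does):

#include <fstream>
#include <iostream>
#include <string>
#include <sys/utsname.h>

int main() {
    struct utsname un = {};
    if (uname(&un) != 0)
        return 1;
    std::ifstream cfg(std::string("/boot/config-") + un.release);
    std::string line;
    while (std::getline(cfg, line))
        // Print the scheduler-tick related options, e.g. CONFIG_HZ=250
        // and CONFIG_NO_HZ_IDLE=y on a stock Ubuntu kernel.
        if (line.rfind("CONFIG_HZ", 0) == 0 || line.rfind("CONFIG_NO_HZ", 0) == 0)
            std::cout << line << '\n';
}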
