I noticed the io_uring kernel side uses CLOCK_MONOTONIC, so for the first timer I get the time with both CLOCK_REALTIME and CLOCK_MONOTONIC, adjust the nanoseconds as shown below, and use the IORING_TIMEOUT_ABS flag for io_uring_prep_timeout. iorn/clock.c at master · hnakamur/iorn
    const long sec_in_nsec = 1000000000;

    static int queue_timeout(iorn_queue_t *queue) {
        iorn_timeout_op_t *op = calloc(1, sizeof(*op));
        if (op == NULL) {
            return -ENOMEM;
        }

        struct timespec rts;
        int ret = clock_gettime(CLOCK_REALTIME, &rts);
        if (ret < 0) {
            fprintf(stderr, "clock_gettime CLOCK_REALTIME error: %s\n", strerror(errno));
            return -errno;
        }
        long nsec_diff = sec_in_nsec - rts.tv_nsec;

        ret = clock_gettime(CLOCK_MONOTONIC, &op->ts);
        if (ret < 0) {
            fprintf(stderr, "clock_gettime CLOCK_MONOTONIC error: %s\n", strerror(errno));
            return -errno;
        }

        op->handler = on_timeout;
        op->ts.tv_sec++;
        op->ts.tv_nsec += nsec_diff;
        if (op->ts.tv_nsec > sec_in_nsec) {
            op->ts.tv_sec++;
            op->ts.tv_nsec -= sec_in_nsec;
        }
        op->count = 1;
        op->flags = IORING_TIMEOUT_ABS;

        ret = iorn_prep_timeout(queue, op);
        if (ret < 0) {
            return ret;
        }

        return iorn_submit(queue);
    }
From the second timer onward, I just increment the seconds part (tv_sec) and use the IORING_TIMEOUT_ABS flag for io_uring_prep_timeout again.
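For illustration only, here is a minimal sketch of that re-arm step, reusing the hypothetical iorn_* names from the snippet above (iorn_prep_timeout, iorn_submit, the op struct) and assuming the op with its absolute timespec is kept alive between firings:

    /* Hypothetical re-arm step: keep the same absolute timespec and just
     * advance it by one second before queueing the next IORING_TIMEOUT_ABS
     * timeout. Error handling mirrors queue_timeout() above. */
    static int requeue_timeout(iorn_queue_t *queue, iorn_timeout_op_t *op) {
        op->ts.tv_sec++;                 /* next deadline, one second later */
        /* op->flags still carries IORING_TIMEOUT_ABS from the first setup */
        int ret = iorn_prep_timeout(queue, op);
        if (ret < 0) {
            return ret;
        }
        return iorn_submit(queue);
    }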
Here is the output from my example program. The millisecond part is zero, but each timeout fires about 400 microseconds after the exact second.
    on_timeout time=2020-05-10T14:49:42.000442
    on_timeout time=2020-05-10T14:49:43.000371
    on_timeout time=2020-05-10T14:49:44.000368
    on_timeout time=2020-05-10T14:49:45.000372
    on_timeout time=2020-05-10T14:49:46.000372
    on_timeout time=2020-05-10T14:49:47.000373
    on_timeout time=2020-05-10T14:49:48.000373
Could you tell me a better way than this?
Answer
Thanks for your comments! I'd like to update the current time for logging like ngx_time_update(). I modified my example to use just CLOCK_REALTIME, but it is still about 400 microseconds late: github.com/hnakamur/iorn/commit/… Does it mean clock_gettime takes about 400 nanoseconds on my machine?
Yes, that sounds about right, sort of. But, if you're on an x86 PC under Linux, 400 ns for clock_gettime overhead may be a bit high (an order of magnitude higher than typical; see below). If you're on an ARM CPU (e.g. a Raspberry Pi or NVIDIA Jetson), it might be okay.
I don't know how you're getting 400 microseconds. But, I've had to do a lot of realtime work under Linux, and 400 us is similar to what I've measured as the overhead of a context switch and/or of waking up a process/thread after a syscall suspends it.
I never use gettimeofday anymore. I now just use clock_gettime(CLOCK_REALTIME, ...) because it's the same except you get nanoseconds instead of microseconds.
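To illustrate the point, a small standalone comparison (not from the question's code): both calls read the same wall clock, clock_gettime just reports nanoseconds where gettimeofday reports microseconds.

    #include <stdio.h>
    #include <sys/time.h>   /* gettimeofday */
    #include <time.h>       /* clock_gettime */

    int main(void) {
        struct timeval tv;
        struct timespec ts;

        gettimeofday(&tv, NULL);                 /* microsecond resolution */
        clock_gettime(CLOCK_REALTIME, &ts);      /* nanosecond resolution */

        printf("gettimeofday:  %ld.%06ld\n", (long) tv.tv_sec, (long) tv.tv_usec);
        printf("clock_gettime: %ld.%09ld\n", (long) ts.tv_sec, ts.tv_nsec);
        return 0;
    }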
Just so you know, although clock_gettime is a syscall, nowadays, on most systems, it uses the VDSO layer. The kernel injects special code into the userspace app, so that it is able to access the time directly without the overhead of a syscall.
If you're interested, you could run under gdb and disassemble the code to see that it just accesses some special memory locations instead of doing a syscall.
I don't think you need to worry about this too much. Just use clock_gettime(CLOCK_MONOTONIC, ...) and set flags to 0. The overhead doesn't factor into this for the purposes of the io_uring call as your iorn layer is using it.
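For illustration, a minimal sketch of that simpler setup, reusing the hypothetical iorn_* names from the question and assuming the timespec and flags are passed straight through to io_uring_prep_timeout (flags of 0 means the timespec is a relative interval):

    /* Sketch only: fire 1 second from submission using a relative timeout,
     * i.e. flags = 0 instead of IORING_TIMEOUT_ABS. */
    static int queue_timeout_relative(iorn_queue_t *queue) {
        iorn_timeout_op_t *op = calloc(1, sizeof(*op));
        if (op == NULL) {
            return -ENOMEM;
        }

        op->handler = on_timeout;
        op->ts.tv_sec = 1;      /* relative: 1 second from now */
        op->ts.tv_nsec = 0;
        op->count = 1;
        op->flags = 0;          /* no IORING_TIMEOUT_ABS */

        int ret = iorn_prep_timeout(queue, op);
        if (ret < 0) {
            return ret;
        }
        return iorn_submit(queue);
    }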
When I do this sort of thing, and I want/need to calculate the overhead of clock_gettime itself, I call clock_gettime in a loop (e.g. 1000 times), and try to keep the total time below a [possible] timeslice. I use the minimum diff between times in each iteration. That compensates for any [possible] timeslicing.
The minimum is the overhead of the call itself [on average].
There are additional tricks that you can do to minimize latency in userspace (e.g. raising process priority, clamping CPU affinity and I/O interrupt affinity), but they can involve a few more things, and, if you’re not very careful, they can produce worse results.
Before you start taking extraordinary measures, you should have a solid methodology for measuring timing/benchmarking, to prove that your current results do not meet your timing/throughput/latency requirements. Otherwise, you're doing complicated things for no real/measurable/necessary benefit.
Below is some code I just created, simplified, but based on code I already have/use to calibrate the overhead:
    #include <stdio.h>
    #include <time.h>

    #define ITERMAX  10000

    typedef long long tsc_t;

    // tscget -- get time in nanoseconds
    static inline tsc_t
    tscget(void)
    {
        struct timespec ts;
        tsc_t tsc;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        tsc = ts.tv_sec;
        tsc *= 1000000000;
        tsc += ts.tv_nsec;

        return tsc;
    }

    // tscsec -- convert nanoseconds to fractional seconds
    double
    tscsec(tsc_t tsc)
    {
        double sec;

        sec = tsc;
        sec /= 1e9;

        return sec;
    }

    tsc_t
    calibrate(void)
    {
        tsc_t tscbeg;
        tsc_t tscold;
        tsc_t tscnow;
        tsc_t tscdif;
        tsc_t tscmin;
        int iter;

        tscmin = 1LL << 62;
        tscbeg = tscget();
        tscold = tscbeg;

        for (iter = ITERMAX; iter > 0; --iter) {
            tscnow = tscget();

            tscdif = tscnow - tscold;
            if (tscdif < tscmin)
                tscmin = tscdif;

            tscold = tscnow;
        }

        tscdif = tscnow - tscbeg;

        printf("MIN:%.9f TOT:%.9f AVG:%.9f\n",
            tscsec(tscmin), tscsec(tscdif), tscsec(tscnow - tscbeg) / ITERMAX);

        return tscmin;
    }

    int
    main(void)
    {
        calibrate();
        return 0;
    }
On my system, a 2.67GHz Core i7, the output is:
    MIN:0.000000019 TOT:0.000254999 AVG:0.000000025
So, I’m getting 25 ns overhead [and not 400 ns]. But, again, each system can be different to some extent.
UPDATE:
Note that x86 processors have "speed step". The OS can adjust the CPU frequency up or down semi-automatically. Lower speeds conserve power. Higher speeds are maximum performance.
This is done with a heuristic (e.g. if the OS detects that the process is a heavy CPU user, it will up the speed).
To force maximum speed, linux has this directory:
/sys/devices/system/cpu/cpuN/cpufreq
Where N is the CPU number (e.g. 0-7).
Under this directory, there are a number of files of interest. They should be self explanatory.
In particular, look at scaling_governor. It has either ondemand [kernel will adjust as needed] or performance [kernel will force maximum CPU speed].
To force maximum speed, as root, set this [once] to performance (e.g.):
echo "performance" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
Do this for all cpus.
However, I just did this on my system, and it had little effect. So, the kernel’s heuristic may have improved.
As to the 400 us: when a process that has been waiting on something is "woken up", it is a two-step process.
1. The process is marked "runnable".
2. At some point, the system/CPU does a reschedule. The process will be run, based upon the scheduling policy and the process priority in effect.
For many syscalls, the reschedule [only] occurs on the next system timer/clock tick/interrupt. So, for some, there can be a delay of up to a full clock tick; for an HZ value of 1000, this can be up to 1 ms (1000 us) later.
On average, this is one half of a clock tick, or 500 us.
For some syscalls, when the process is marked runnable, a reschedule is done immediately. If the process has a higher priority, it will be run immediately.
When I first looked at this [circa 2004], I looked at all code paths in the kernel, and the only syscall that did the immediate reschedule was SysV IPC, for msgsnd/msgrcv. That is, when process A did msgsnd, any process B waiting for the given message would be run.
But, others did not (e.g. futex). They would wait for the timer tick. A lot has changed since then, and now more syscalls will do the immediate reschedule. For example, I recently measured futex [invoked via pthread_mutex_*], and it seemed to do the quick reschedule.
Also, the kernel scheduler has changed. The newer scheduler can wakeup/run some things on a fraction of a clock tick.
So, for you, the 400 us is [possibly] the alignment to the next clock tick.
But, it could just be the overhead of doing the syscall. To test that, I modified my test program to open /dev/null [and/or /dev/zero], and added read(fd,buf,1) to the test loop.
I got a MIN: value of 529 us. So, the delay you're getting could just be the amount of time it takes to do the task switch.
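A sketch of such a modified loop (not the exact program used for the number above): the read() of /dev/null (or /dev/zero) adds one real syscall per iteration, so the reported minimum now includes syscall entry/exit rather than just the VDSO clock path. The printed value will of course vary from system to system.

    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    #define ITERMAX  10000

    typedef long long tsc_t;

    // tscget -- get time in nanoseconds (same helper as in calibrate above)
    static inline tsc_t tscget(void)
    {
        struct timespec ts;
        tsc_t tsc;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        tsc = ts.tv_sec;
        tsc *= 1000000000;
        tsc += ts.tv_nsec;

        return tsc;
    }

    int main(void)
    {
        char buf[1];
        tsc_t tscold, tscnow, tscdif;
        tsc_t tscmin = 1LL << 62;
        int fd = open("/dev/null", O_RDONLY);   /* or "/dev/zero" */

        if (fd < 0)
            return 1;

        tscold = tscget();
        for (int iter = ITERMAX; iter > 0; --iter) {
            read(fd, buf, 1);                   /* the extra syscall being measured */

            tscnow = tscget();
            tscdif = tscnow - tscold;
            if (tscdif < tscmin)
                tscmin = tscdif;
            tscold = tscnow;
        }

        printf("MIN with read(): %lld ns\n", tscmin);
        close(fd);
        return 0;
    }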
This is what I would call “good enough for now”.
To get "razor's edge" response, you'd probably have to write a custom kernel driver and have the driver do this. This is what embedded systems would do if (e.g.) they had to toggle a GPIO pin on every interval.
But, if all you're doing is printf, the overhead of printf and the underlying write(1,...) tends to swamp the actual delay.
Also, note that when you do printf, it builds the output buffer and, when the buffer in FILE *stdout is full, it flushes via write.
For best performance, it's better to do:

    int len = sprintf(buf, "current time is ...");
    write(1, buf, len);
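As a hedged sketch of that pattern applied to the timestamp logging in the question (the field layout just mimics the sample output above; gmtime_r is used here, and localtime_r would give local time instead):

    /* Sketch only: format the timestamp once into a local buffer with
     * snprintf, then hand the whole line to write(2) in a single call,
     * bypassing stdio buffering. */
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static void log_current_time(void)
    {
        struct timespec ts;
        struct tm tm;
        char buf[64];
        int len;

        clock_gettime(CLOCK_REALTIME, &ts);
        gmtime_r(&ts.tv_sec, &tm);          /* or localtime_r() for local time */

        len = snprintf(buf, sizeof(buf),
                       "on_timeout time=%04d-%02d-%02dT%02d:%02d:%02d.%06ld\n",
                       tm.tm_year + 1900, tm.tm_mon + 1, tm.tm_mday,
                       tm.tm_hour, tm.tm_min, tm.tm_sec,
                       ts.tv_nsec / 1000);
        write(1, buf, len);
    }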
Also, when you do this, if the kernel buffers for TTY I/O get filled [which is quite possible given the high frequency of messages you’re doing], the process will be suspended until the I/O has been sent to the TTY device.
To do this well, you'd have to watch how much space is available, and skip some messages if there isn't enough space to [wholly] contain them.
You'd need to do ioctl(1,TIOCOUTQ,...) to see how much output is already queued, and skip a message if there isn't enough room left for it (e.g. the len value above).
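A sketch of that idea (not the exact code behind the advice above): TIOCOUTQ reports the number of bytes still queued in the kernel's TTY output buffer, so deciding whether a message "fits" needs some assumed budget; TTY_OUTQ_LIMIT below is a made-up value for illustration.

    #include <sys/ioctl.h>
    #include <unistd.h>

    #define TTY_OUTQ_LIMIT 4096     /* assumed budget for queued output bytes */

    /* Write the message only if it would not push the amount of queued TTY
     * output past the assumed budget; otherwise drop it so the process never
     * blocks waiting on the TTY device. */
    static int try_write_msg(const char *buf, int len)
    {
        int queued = 0;

        if (ioctl(1, TIOCOUTQ, &queued) < 0)
            return -1;

        if (queued + len > TTY_OUTQ_LIMIT)
            return 0;               /* skip this message */

        return (int) write(1, buf, len);
    }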
For your usage, you're probably more interested in the latest time message, rather than outputting all messages [which would eventually produce a lag].