I’m trying to use the __rdtscp intrinsic to measure time intervals. The target platform is Linux x64 with an Intel Xeon X5550 CPU. Although the constant_tsc flag is set for this processor, calibrating __rdtscp gives very different results: as we can see, the difference between program runs can be up to a factor of three (125 vs. 360). Such instability makes it unusable for any measurement. Here is …
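A minimal calibration sketch (my own, not taken from the question): it compares __rdtscp ticks against CLOCK_MONOTONIC over a fixed sleep to estimate the TSC frequency; the ~100 ms interval is an arbitrary choice.

```c
/* Sketch: estimate TSC frequency by comparing __rdtscp against
 * CLOCK_MONOTONIC over a fixed interval. Assumes GCC/Clang on
 * Linux x86-64; the 100 ms sleep is an arbitrary choice. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <x86intrin.h>

static uint64_t ns_now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

int main(void)
{
    unsigned aux;
    uint64_t t0 = ns_now();
    uint64_t c0 = __rdtscp(&aux);            /* read TSC, waits for prior instructions */

    struct timespec req = { 0, 100 * 1000 * 1000 };  /* ~100 ms */
    nanosleep(&req, NULL);

    uint64_t c1 = __rdtscp(&aux);
    uint64_t t1 = ns_now();

    /* ticks per nanosecond == GHz */
    double ghz = (double)(c1 - c0) / (double)(t1 - t0);
    printf("estimated TSC frequency: %.3f GHz\n", ghz);
    return 0;
}
```

With constant_tsc the estimate should stay close across runs; a spread like 125 vs. 360 usually points at the measurement loop rather than the TSC itself.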
Tag: rdtsc
Getting CPU cycles using RDTSC – why does the value of RDTSC always increase?
I want to get the CPU cycle count at a specific point, and I use this function at that point: (editor’s note: “=A” is the wrong constraint for x86-64; there it picks either RDX or RAX. Only in 32-bit mode does it pick the EDX:EAX output you want. See How to get the CPU cycle count in x86_64 from C++?) The problem is that it …
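A hedged sketch along the lines of the editor’s note (not the asker’s original function): on x86-64 the two 32-bit halves returned in EDX:EAX must be combined explicitly, or you can skip inline asm and use the __rdtsc intrinsic.

```c
/* Sketch: reading the TSC correctly on x86-64. RDTSC returns the low
 * 32 bits in EAX and the high 32 bits in EDX, so the halves must be
 * combined by hand; the "=A" constraint only does that in 32-bit mode. */
#include <stdint.h>
#include <x86intrin.h>

static inline uint64_t rdtsc_asm(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

/* Alternatively, the compiler intrinsic avoids inline asm entirely. */
static inline uint64_t rdtsc_intrin(void)
{
    return __rdtsc();
}
```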
rdtsc accuracy across CPU cores
I am sending network packets from one thread and receiving the replies on a second thread that runs on a different CPU core. My process measures the time between the send and the receive of each packet (similar to ping). I am using rdtsc to get high-resolution, low-overhead timing, which my implementation needs. All measurements look reliable. Still, I am worried …
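One way to sanity-check cross-core readings (my own sketch, not the asker’s code): __rdtscp also returns the IA32_TSC_AUX value, which Linux programs to encode the CPU number, so each timestamp can record which core it was taken on and a cross-core pairing can be flagged.

```c
/* Sketch: take a timestamp and record which CPU it was read on.
 * Assumes Linux, which programs IA32_TSC_AUX so that the RDTSCP aux
 * value encodes the CPU number in its low 12 bits (common encoding). */
#include <stdint.h>
#include <x86intrin.h>

struct stamp {
    uint64_t tsc;   /* raw TSC value */
    unsigned cpu;   /* CPU the reading was taken on */
};

static inline struct stamp read_stamp(void)
{
    struct stamp s;
    unsigned aux;
    s.tsc = __rdtscp(&aux);
    s.cpu = aux & 0xfff;    /* low 12 bits hold the CPU id on Linux */
    return s;
}

/* Usage idea: stamp on the send thread and again on the receive thread;
 * if send.cpu != recv.cpu, the delta spans two cores and relies on the
 * TSCs being synchronized (constant_tsc/nonstop_tsc, same socket). */
```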