
Tag: performance

Writing a small file blocks for 20 ms

I discovered that on my Ubuntu 22 server, attempting to write to a file often induces a delay of around 20 ms, even when writing only a few bytes. Here is some basic code that demonstrates the problem: And here is the output: It seems more likely to happen if there is a bit of delay between attempts, and also more likely to
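The excerpt above doesn't include the post's code, but a minimal sketch of the kind of test it describes could look like the following; the file path, write size, iteration count, and sleep interval are placeholders, not values from the post.

import time

# Time repeated small writes to one file and report how long each takes.
# PATH and the 0.5 s pause are illustrative only.
PATH = "/tmp/write-latency-test"

with open(PATH, "wb", buffering=0) as f:   # unbuffered, so write() goes straight to the kernel
    for i in range(50):
        start = time.perf_counter()
        f.write(b"hello\n")                # only a few bytes
        elapsed_ms = (time.perf_counter() - start) * 1000
        print(f"write {i:2d}: {elapsed_ms:.2f} ms")
        time.sleep(0.5)                    # the post notes that a pause between attempts matters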

understand sysstat sar memory output

I’m preparing for more traffic in the days to come, and I want to be sure the server can handle it. Running sar -q, a load of “3.5” doesn’t seem like much on a 32-CPU machine: However, I’m not sure about the memory. Running sar -r shows 98.5% for %memused and only 13.60% for %commit: Running htop looks OK too: 14.9G/126G.
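Roughly speaking, sar’s %memused counts memory that is not free, which (depending on the sysstat version) can include page cache and buffers, while %commit compares Committed_AS against RAM plus swap; htop’s 14.9G figure largely excludes cache, which is why the numbers look so different. The sketch below reads /proc/meminfo to make the distinction visible; the formulas are rough approximations of what sar reports, not sysstat’s exact code.

# Approximate the memory ratios discussed above from /proc/meminfo.
# These are rough analogues of sar's %memused and %commit, not the
# exact sysstat formulas.
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            info[key] = int(value.split()[0])   # values are reported in kB
    return info

m = meminfo()
total = m["MemTotal"]
not_free = total - m["MemFree"]                  # includes page cache and buffers
commit_pct = 100 * m["Committed_AS"] / (total + m["SwapTotal"])

print(f"not free:   {100 * not_free / total:.1f}% of RAM (cache counts as 'used')")
print(f"available:  {100 * m['MemAvailable'] / total:.1f}% of RAM")
print(f"~%commit:   {commit_pct:.1f}%")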

Usage of getc with a file

To print the contents of a file, one can use getc: How efficient is the getc function? That is, how frequently does it actually make operating system calls, or do anything else that would take a non-trivial amount of time? For example, let’s say I had a 10 TB file: would calling this function trillions of times be a poor way to
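For what it’s worth, getc normally reads from a user-space buffer inside the FILE stream (typically a few kilobytes) and only issues a read() system call when that buffer runs dry, so the per-call cost is tiny. The sketch below is not getc itself but a Python analogue of the same idea, comparing buffered and unbuffered one-byte reads; PATH is a placeholder for any reasonably large file.

import time

PATH = "/tmp/sample-data"   # placeholder: any reasonably large file

def read_one_byte_at_a_time(buffering):
    """Read the whole file one byte per call and return (bytes, seconds)."""
    start = time.perf_counter()
    n = 0
    with open(PATH, "rb", buffering=buffering) as f:
        while f.read(1):
            n += 1
    return n, time.perf_counter() - start

# buffering=-1 uses Python's default buffered reader (similar in spirit to stdio's getc);
# buffering=0 forces a system call per byte.
for label, buffering in [("buffered", -1), ("unbuffered", 0)]:
    n, seconds = read_one_byte_at_a_time(buffering)
    print(f"{label:10s}: {n} bytes in {seconds:.3f} s")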

SSD vs. tmpfs speed

I made a tmpfs filesystem in my home directory on Ubuntu using this command: Then I wrote this Python program: The result: I am confused by this result. Isn’t tmpfs a RAM-based filesystem, and isn’t RAM supposed to be notably faster than any disk, including SSDs? Furthermore, I noticed that this program is using over
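The mount command and program from the post aren’t shown here, but one common explanation for results like this is that buffered writes to an SSD-backed filesystem also land in the page cache (RAM) first, so without fsync the benchmark largely measures memory copies on both paths. A minimal comparison sketch, assuming ~/tmpfs is the tmpfs mount and ~/ssd-test is a directory on the SSD (both are assumed paths, not the post’s):

import os
import time

DATA = b"x" * (256 * 1024 * 1024)   # 256 MiB of dummy data (size is arbitrary)

def timed_write(path, sync):
    """Write DATA to path and return the elapsed seconds, optionally fsync'ing."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(DATA)
        if sync:
            f.flush()
            os.fsync(f.fileno())
    return time.perf_counter() - start

for directory in (os.path.expanduser("~/tmpfs"), os.path.expanduser("~/ssd-test")):
    target = os.path.join(directory, "bench.bin")
    for sync in (False, True):
        label = "fsync" if sync else "no fsync"
        print(f"{directory} ({label}): {timed_write(target, sync):.3f} s")
    os.remove(target)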

Searching a text file backwards from the end

I’m trying to find the line containing a given substring in a text file by starting at the end. The file has tens of millions of lines. (The requirement is to read from the end of the file; I cannot use sed/awk/grep, etc.) The program below does the job, but it takes a long time. How can I make it run faster?
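The post’s program isn’t included in this excerpt, but the usual way to speed this up is to read the file backwards in large blocks and split each block into lines, rather than seeking and reading one line or one byte at a time. A sketch of that approach; the function name, block size, and decoding choices are illustrative, not the post’s code:

import os

def find_last_line_containing(path, needle, block_size=1 << 20):
    """Scan the file backwards in large blocks and return the last line
    containing `needle`, or None if no line matches."""
    needle_b = needle.encode()
    with open(path, "rb") as f:
        f.seek(0, os.SEEK_END)
        pos = f.tell()
        tail = b""                       # partial line carried over from the block to the right
        while pos > 0:
            read_size = min(block_size, pos)
            pos -= read_size
            f.seek(pos)
            block = f.read(read_size) + tail
            lines = block.split(b"\n")
            # lines[0] may continue into the not-yet-read block, so keep it for the next pass
            tail = lines[0]
            for line in reversed(lines[1:]):
                if needle_b in line:
                    return line.decode(errors="replace")
        if needle_b in tail:             # the very first line of the file
            return tail.decode(errors="replace")
    return None

print(find_last_line_containing("/tmp/big.log", "ERROR"))   # paths/values are placeholders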
