Running cPanel on a server with various customer accounts under the /home
directory.
Many customers’ error_log files are exceeding a desired size (let’s say 100MB) and I want to create a cron job that runs daily to truncate any files over that size.
I know truncate can shrink files, but it will also extend files that are smaller than the stipulated size. Does my solution below (first finding all files above the desired size and only shrinking those) make the most sense, and will it work?
for i in $(find /home -type f -iname error_log -size +99M); do truncate -s 100M "$i"; done
Answer
I’d suggest rotating and compressing logs rather than truncating them. Logs typically compress really well, and you can move the compressed logs to backup media if you like. Plus, if you do have to delete anything, delete the oldest logs, not the newest ones.
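For instance, a daily cron script could do a crude copy-and-compress rotation itself. This is only a sketch, assuming the processes writing error_log open it in append mode (as Apache does), so the live file can be emptied in place, and reusing the question’s 100MB threshold:
find /home -type f -name error_log -size +99M | while IFS= read -r log; do
    stamp=$(date +%F)            # date suffix, e.g. 2024-05-01
    cp -p "$log" "$log.$stamp"   # keep a copy of the current contents
    : > "$log"                   # empty the live file; open handles stay valid
    gzip -f "$log.$stamp"        # compress the rotated copy
done
A tool like logrotate does the same thing more robustly (its copytruncate option follows this copy-then-empty pattern), so prefer it if it’s available to you.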
That said, for educational purposes let’s explore truncate. It does have the ability to only shrink files, though it’s buried in the documentation:
SIZE may also be prefixed by one of the following modifying characters: '+' extend by, '-' reduce by, '<' at most, '>' at least, '/' round down to multiple of, '%' round up to multiple of.
If the files are at a fixed depth you don’t need the loop or the find call. A simple glob will do:
truncate -s '<100M' /home/*/path/to/error_log
If they’re at unpredictable depths you can use bash’s recursive globbing (the globstar option)…
shopt -s globstar
truncate -s '<100M' /home/**/error_log
…or use find -exec <cmd> {} +, which tells find to invoke a command on the files it finds.
find /home -name error_log -exec truncate -s '<100M' {} +
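To run that daily, as the question asks, a root crontab entry along these lines would do (the 3:30 a.m. time and the choice of root’s crontab are just assumptions for the sketch):
30 3 * * * find /home -name error_log -exec truncate -s '<100M' {} +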
(If there are lots and lots of files, find is safest. The glob versions could exceed Linux’s command-line length limit, whereas find -exec … {} + guards against that possibility by splitting the file list across multiple truncate invocations.)
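If you’re curious what that limit is on your system, getconf reports it in bytes:
getconf ARG_MAX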