
How to get logs of individual arguments passed to a shell script from a file

I have a shell script. In this script I am reading table names from a file and executing a command.

The script is working fine. I am able to execute the command for all the tables in the file.

Shell script:

#!/bin/bash

[ $# -ne 1 ] && { echo "Usage : $0 input file "; exit 1; }
args_file=$1

TIMESTAMP=`date "+%Y-%m-%d"`
touch /home/$USER/logs/${TIMESTAMP}.success_log
touch /home/$USER/logs/${TIMESTAMP}.fail_log 
success_logs=/home/$USER/logs/${TIMESTAMP}.success_log
failed_logs=/home/$USER/logs/${TIMESTAMP}.fail_log

#Function to get the status of the job creation
function log_status
{
       status=$1
       message=$2
       if [ "$status" -ne 0 ]; then
                echo "`date +"%Y-%m-%d %H:%M:%S"` [ERROR] $message [Status] $status : failed" | tee -a "${failed_logs}"
                #echo "Please find the attached log file for more details"
                #exit 1
       else
                echo "`date +"%Y-%m-%d %H:%M:%S"` [INFO] $message [Status] $status : success" | tee -a "${success_logs}"
       fi
}

while read table ;do 
  spark-submit hive.py $table 
done < ${args_file}

g_STATUS=$?
log_status $g_STATUS "Spark ${table}"

In this script I want to collect the status logs and the stdout logs, individually for each table in the file.

I want to know whether the spark-submit execution succeeded or failed for each table in the file, i.e. the status logs.

How can I collect the stdout for each table individually and store it at a location in Linux?

What changes do I need to make to achieve this?


Answer

Just redirect the output (stdout) generated for each table instance in your script to a folder under /var/log/, say myScriptLogs:

mkdir -p /var/log/myScriptLogs || { echo "mkdir failed"; exit; }

while read -r table ;do 
  spark-submit hive.py "$table" > /var/log/myScriptLogs/"${table}_dump.log" 2>&1 
done < "${args_file}" 

The script exits early if mkdir cannot create the new directory for some reason. Otherwise this creates a log for each table being processed, under /var/log/myScriptLogs as <table_name>_dump.log, which you can rename however you want.
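If you also want per-table status entries from your log_status function, a minimal sketch (assuming log_status, args_file, success_logs and failed_logs are defined exactly as in your script) is to capture $? inside the loop, immediately after each spark-submit, instead of once after the loop:

mkdir -p /var/log/myScriptLogs || { echo "mkdir failed"; exit 1; }

while read -r table; do
  # Redirect this table's stdout and stderr to its own log file.
  spark-submit hive.py "$table" > /var/log/myScriptLogs/"${table}_dump.log" 2>&1
  # $? must be read right after spark-submit, inside the loop,
  # so each table gets its own success/failure entry.
  log_status $? "Spark ${table}"
done < "${args_file}"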

A couple of best practices: use the -r flag with read, and double-quote shell variables.
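A small sketch illustrating both points (using the same args_file as above):

# Without -r, read treats backslashes as escape characters, so a name
# like "my\table" would be mangled; with -r it is read literally.
while read -r table; do
  # Quoting "$table" prevents word splitting and globbing, so the name
  # is passed on as a single argument.
  printf 'processing table: %s\n' "$table"
done < "${args_file}"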


Answer updated to redirect stderr to the log file as well.
