Bash – if statement not automatically operating

I’m having a very odd problem with a bash script written on an Ubuntu 17.04 machine.

I have a txt file that contains information about people in this fashion:

number name surname city state

Using this information, I have to create an organization system that works by state. For example, with a list like this

123 alan smith new_york NEW_YORK

123 bob smith buffalo NEW_YORK

123 charles smith los_angeles CALIFORNIA

123 dean smith mobile ALABAMA

the outcome at the end of the computation should be three new files named NEW_YORK, CALIFORNIA and ALABAMA that contain people who live there.

The script takes the list of names as a parameter. Inside a for loop I implemented an if statement (whose condition is a test of existence for the file, in case more than one person lives in a certain state) that, oddly, doesn’t operate unless I press Enter while the program is running. The outcome is right: I get the files with the right people in them. But it baffles me that I have to press Enter to make the code work; it doesn’t make sense to me.

Here’s my code:

#!/bin/bash

clear

#finding how many file lines and adding 1 to use the value as a counter later
fileLines=`wc -l addresses | cut -f1 --delimiter=" "`
(( fileLines = fileLines+1 ))

for (( i=1; i<$fileLines; i++ ))
do    
    #if the file named as the last column already exists do not create new one
    test -e `head -n$i | tail -n1 | cut -f5 --delimiter=" "`
    if [ $? = 0 ]
    then
        head -n$i $1 | tail -n1 >> `head -n$i $1 | tail -n1 | cut -f5 --delimiter=" "`
    else
        head -n$i $1 | tail -n1 > `head -n$i $1 | tail -n1 | cut -f5 --delimiter=" "`
    fi
done

echo "cancel created files? y/n"
read key

if [ $key = y ]
then
    rm `ls | grep [A-Z]$`
    echo "done"
    read
else
    echo "done"
    read
fi

clear

What am I doing wrong here? And moreover, why doesn’t it tell me that something’s wrong (clearly there is)?

Answer

The immediate problem (pointed out by @that other guy) is that in the line:

test -e `head -n$i | tail -n1 | cut -f5 --delimiter=" "`

The head command isn’t given a filename to read from, so it’s reading from stdin (i.e. you). But I’d change the whole script drastically, because you’re doing it in a very inefficient way. If you have, say, a 1000-line file, you run head to read the first line (3 times, actually), then the first two lines (three times), then the first three… by the time you’re done, head has read the first line of the file 3000 times, and then tail has discarded it 2997 of those times. You only really needed to read it once.

When iterating through a file like this, you’re much better off just reading the file line-by-line, with something like this:

while IFS= read -r line; do
    # process "$line" here
done <"$1"
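Applied to this task, a minimal sketch of the whole loop might look like the following. It assumes the state is the fifth whitespace-separated field, and uses a sample file named addresses as a stand-in for the script’s "$1" argument:

```shell
#!/bin/bash
# Sketch only: split an address list into per-state files, reading
# the file line by line instead of re-running head/tail per line.
cd "$(mktemp -d)"

# Sample input, standing in for the "$1" argument of the script.
cat > addresses <<'EOF'
123 alan smith new_york NEW_YORK
123 bob smith buffalo NEW_YORK
123 charles smith los_angeles CALIFORNIA
123 dean smith mobile ALABAMA
EOF

while IFS= read -r line; do
    # Pull out the fifth field (the state); skip lines without one.
    read -r _ _ _ _ state _ <<<"$line"
    [ -n "$state" ] && printf '%s\n' "$line" >>"$state"
done < addresses
```

The `>>` append means a second person from the same state lands in the same file, so the exists-or-not test from the original script isn’t needed at all.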

But in this case, there’s an even better tool. awk is really good at processing files like this, and it can handle the task really simply:

awk '{ if($5!="") { print $0 >>$5 }}' "$1"

(Note: I also put in the if to make sure there is a fifth field/ignore blank lines. Without that check it would’ve just been awk '{ print $0 >>$5 }' "$1").
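For instance, run against the sample list from the question (saved here as addresses, standing in for "$1"), the one-liner produces the per-state files directly:

```shell
#!/bin/bash
# Demo of the awk one-liner on the sample data from the question.
cd "$(mktemp -d)"

cat > addresses <<'EOF'
123 alan smith new_york NEW_YORK
123 bob smith buffalo NEW_YORK
123 charles smith los_angeles CALIFORNIA
123 dean smith mobile ALABAMA
EOF

# awk opens (or appends to) one output file per distinct fifth field.
awk '{ if($5!="") { print $0 >>$5 }}' addresses
```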

Also, the command:

rm `ls | grep [A-Z]$`

…is a really weird and fragile way to do this. Parsing the output of ls is generally a bad idea, and again there’s a much simpler way to do it:

rm *[A-Z]
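One caveat worth knowing: if nothing matches, bash passes the unexpanded pattern `*[A-Z]` to rm verbatim and rm errors out. A guarded sketch using bash’s nullglob option (the file names here are just demo stand-ins) avoids that:

```shell
#!/bin/bash
# Sketch: safer cleanup of the generated state files. With nullglob
# set, a glob that matches nothing expands to zero words, so rm is
# never handed the literal pattern "*[A-Z]".
cd "$(mktemp -d)"
touch NEW_YORK CALIFORNIA data1   # demo files (hypothetical names)

shopt -s nullglob
files=( *[A-Z] )
if (( ${#files[@]} > 0 )); then
    rm -- "${files[@]}"
fi
```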

Finally, I recommend running your script through shellcheck.net, since it’ll point out some other problems (e.g. unquoted variable references).

User contributions licensed under: CC BY-SA