
Count lines in large files

I commonly work with text files around 20 GB in size, and I very often need to count the number of lines in a given file.

The way I do it now is cat fname | wc -l, and it takes a very long time. Is there any solution that would be much faster?

I work on a high-performance cluster with Hadoop installed. I was wondering whether a MapReduce approach could help.

I'd like the solution to be as simple to run as the one-line wc -l solution, but I'm not sure how feasible that is.

Any ideas?



Try: sed -n '$=' filename

Here -n suppresses sed's normal output, the $ address matches only the last line, and the = command prints that line's number, i.e. the total line count.
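A quick sanity check on a tiny throwaway file (sample.txt is just an illustrative name):

    $ printf 'a\nb\nc\n' > sample.txt
    $ sed -n '$=' sample.txt
    3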

Also, cat is unnecessary: wc -l filename is enough in your present approach, and it saves piping the whole file through an extra process.
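If you want to spread the counting across CPU cores on a single machine, here is a minimal sketch using GNU parallel (assuming it is installed; fname stands in for your file):

    parallel --pipepart -a fname --block 1G wc -l | awk '{s+=$1} END {print s}'

--pipepart splits the file into roughly 1 GB chunks at line boundaries, each chunk gets its own wc -l, and awk sums the per-chunk counts. Bear in mind that counting lines in a 20 GB file is often I/O-bound, so the speedup depends on how fast your storage can feed the cores.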
