
Bash loop only reads the last line

I have problems trying to extract the data after the colons on multiple lines using a while loop and awk. This is my data structure: What I want to get is the BioSample ID, which looks like SAMD00019077. Scripts I tried: while read line ; do echo $line | awk -F':' '{print $3}' > 1.tmp2 ; done < 1.tmp and for line in $(cat 1.tmp);
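A minimal sketch of why only the last ID survives: the `>` inside the loop truncates 1.tmp2 on every iteration, so each line overwrites the previous one. The input lines below are an assumption based on the SAMD00019077 example; redirecting once, or letting awk read the whole file, fixes it.

```shell
# Assumed input format: "BioSample: DDBJ: SAMD00019077"
printf 'BioSample: DDBJ: SAMD00019077\nBioSample: DDBJ: SAMD00019088\n' > 1.tmp

# Broken: > truncates 1.tmp2 on each pass, so only the last ID remains.
while read -r line ; do echo "$line" | awk -F':' '{print $3}' > 1.tmp2 ; done < 1.tmp

# Fixed: one awk invocation over the whole file; the separator regex
# ': *' also swallows the space after each colon.
awk -F': *' '{print $3}' 1.tmp > 1.tmp2
cat 1.tmp2
```

The loop is unnecessary here: awk already iterates over lines, and a single redirection of its output keeps every record.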

Problem inserting on first match only using GNU sed

I am on GNU sed 4 on my Linux machine (I checked with sed --version). Currently, I have a myfile.txt with the following content: I know that in GNU sed I can append after the first occurrence of a match if I prepend the sed command with 0,. So, if I want to insert goodbye after the first occurrence of —, I can do that: expected/correct
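A sketch of the GNU sed 0,/regexp/ address range the snippet describes: the range ends at the first matching line, so the substitution fires only once. The file contents and the marker line "---" are assumptions standing in for the snippet's mangled dash pattern.

```shell
# Assumed file: two identical marker lines, but only the first should
# get "goodbye" appended after it.
printf 'hello\n---\nworld\n---\nend\n' > myfile.txt

# GNU sed: 0,/^---$/ addresses everything up to and including the FIRST
# match; & re-inserts the matched line, \n adds goodbye after it.
sed '0,/^---$/ s/^---$/&\ngoodbye/' myfile.txt
```

Without the 0, prefix the range 1,/^---$/ would also end at the first match but always includes line 1; 0,/re/ is the GNU extension that lets the range end on line 1 itself.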

Why can't the DataNode download a file?

It's very strange. The jps command shows that the NameNode and DataNode have already started. I can open the NameNode web UI (port 50070) and use "hdfs dfs -get" to fetch a file, but I can't download the file from the NameNode web UI. Answer The problem is the /etc/hosts file. I had mapped the hostname to the IP 127.0.0.1, so Hadoop used
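A sketch of the /etc/hosts fix the answer points at: when the hostname resolves to the loopback address, the DataNode advertises 127.0.0.1 and downloads from any other machine's browser fail. The hostname and LAN IP below are assumptions.

```
# /etc/hosts -- before (broken): the cluster hostname resolves to loopback,
# so the DataNode advertises 127.0.0.1 to web-UI clients
# 127.0.0.1    hadoop-master

# After (assumed LAN IP): the hostname resolves to an address that is
# reachable from the machine running the browser
192.168.1.10   hadoop-master
127.0.0.1      localhost
```

After editing /etc/hosts, the HDFS daemons need a restart so they re-resolve and re-advertise the new address.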

How to split file based on first character in Linux shell

I have a fixed-width flat file with header and detail data. Both can be recognized by the first character: 1 for a header record and 2 for a detail record. I want to generate 2 different files from my fixed-width file, each file having its own record set, but without the record-type character written. File Header.txt should have only the type 1 records.
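One way to sketch this with awk: route each record by its first character and strip that character on the way out. The sample records are assumptions; Header.txt and the Detail.txt counterpart follow the naming in the snippet.

```shell
# Assumed sample data: first character 1 = header record, 2 = detail record.
printf '1HEADER-A\n2DETAIL-1\n2DETAIL-2\n' > fixedwidth.txt

# Route each record to its file by type, dropping the type character
# itself via substr (everything from position 2 onward).
awk '/^1/ { print substr($0, 2) > "Header.txt" }
     /^2/ { print substr($0, 2) > "Detail.txt" }' fixedwidth.txt
```

A single pass over the input writes both outputs, which matters for large flat files.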

Update data in source file with random data from another file

I have data in a source file as below (file.txt). Input command: (N4 = segment identifier, 1 = position, ref.txt = reference file). ref.txt has data as below. I have the below code, which displays the data at position x (input) for N4. Now how can I integrate ref.txt into the above code to update WALTER and JESSI in file.txt with random text located in ref.txt
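A sketch of one way to wire the two files together in awk, under heavy assumptions: '*'-separated N4 segments with the name at position 1 after the identifier, and ref.txt as a plain list of replacement strings. The first pass loads ref.txt into an array; the second rewrites the field with a random entry.

```shell
# Assumed layout of both files.
printf 'N4*WALTER*NY\nN4*JESSI*CA\n' > file.txt
printf 'ALPHA\nBRAVO\nCHARLIE\n' > ref.txt

# NR==FNR is true only while reading the first file (ref.txt); every N4
# record in file.txt then gets a random array entry at position pos.
awk -F'*' -v OFS='*' -v pos=1 '
    BEGIN     { srand() }
    NR == FNR { ref[++n] = $0; next }
    $1 == "N4" { $(pos+1) = ref[int(rand()*n) + 1] }
    1' ref.txt file.txt
```

Setting OFS to match FS keeps the segment's '*' delimiters intact when awk rebuilds the modified record.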

Hard links referring to a file and stat st_nlink don't match

Using Ubuntu 18.04 bash, if I list all files that share the same specific inode (4) with: I can see different values of the hard link count for that same specific inode (4). The same occurs if I do it with C code. For other inodes I get the correct, identical hard link counts. What is the problem with
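For a sanity check on a filesystem where the counts do agree, this sketch (using GNU stat and find, available on Ubuntu 18.04) creates three hard links to one inode and compares st_nlink with the number of directory entries find locates for that inode:

```shell
# Three names for one inode inside a scratch directory.
tmpdir=$(mktemp -d)
touch "$tmpdir/a"
ln "$tmpdir/a" "$tmpdir/b"
ln "$tmpdir/a" "$tmpdir/c"

stat -c '%h' "$tmpdir/a"                # st_nlink from the inode: 3
inode=$(stat -c '%i' "$tmpdir/a")
find "$tmpdir" -inum "$inode" | wc -l   # directory entries found: 3

rm -r "$tmpdir"
```

On an ordinary filesystem these two numbers match; a mismatch usually points at searching by an inode number that is reused across different filesystems or mount points, since inode numbers are only unique per filesystem.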

Split CSV file in bash into multiple files based on condition

My CSV file has multiple rows of data and I want to split it into multiple files based on one attribute. SQL code with ORDER BY ID is triggered from beeline, which creates a single CSV.

cat sql.csv
attr;attr;ID;attr
data;data;XXXX;date
data;data;XXXX;date
data;data;YYYYY;date
data;data;YYYYY;date
data;data;BBBBB;date
data;data;BBBBB;date

Desired result is to split once a new ID is recognised and to use that ID in the filename.
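Since ORDER BY ID guarantees the rows arrive grouped, one awk pass can open a new output file each time the ID column changes. The ';' separator and ID in field 3 follow the sample; the split_<ID>.csv naming is an assumption.

```shell
# Assumed input: ';'-separated, already sorted by the ID column (field 3),
# with a header row to skip.
printf 'attr;attr;ID;attr\ndata;data;XXXX;date\ndata;data;XXXX;date\ndata;data;YYYYY;date\n' > sql.csv

# Start a new file whenever $3 differs from the previous row's ID.
awk -F';' '
    NR == 1    { next }
    $3 != prev { prev = $3; file = "split_" $3 ".csv" }
    { print > file }' sql.csv
```

Because the input is pre-sorted, only one output file is open at a time, which sidesteps awk's limit on simultaneously open files for data with many distinct IDs.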
