I have a bash script which is executed every 5 seconds; it goes like this: It pings a server running in a neighboring Virtual Machine and records the curl response into the specified file “debian0_ping.txt”, but the recorded response looks like this: Which is a problem because I need the ping responses to be recorded this way (the most
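The script itself is not shown in the excerpt, so the following is only a minimal sketch of how such a loop might record one curl response per line; the URL, output file name, and timestamp format are assumptions:

    #!/bin/bash
    # hypothetical sketch: append one timestamped curl result per line
    URL="http://192.168.122.10"          # assumed neighbor-VM address
    OUT="debian0_ping.txt"

    # -s silences progress noise; -w prints the status code and total time;
    # printf guarantees a trailing newline so records do not run together
    response=$(curl -s -o /dev/null -w '%{http_code} %{time_total}s' "$URL")
    printf '%s %s\n' "$(date '+%F %T')" "$response" >> "$OUT"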
In bash, how to move row fields to columns in a text file
I have a .txt file with this record: I would like to convert it like this: I have tried something like this: or like this: but I can’t get it to work. I would like the values to be transposed so that each tuple of values associated with a specific field is aligned with the others. Any suggestions? Thank you in advance
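The sample records and expected output were omitted from the excerpt, so purely as a sketch, and assuming a whitespace-separated file where every line has the same number of fields, awk can do the transposition in one pass:

    # transpose rows to columns; assumes whitespace-separated fields
    # and the same field count on every line
    awk '
    {
        for (i = 1; i <= NF; i++)
            col[i] = col[i] (NR > 1 ? " " : "") $i
    }
    END {
        for (i = 1; i <= NF; i++)
            print col[i]
    }' input.txt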
Group By and Sum from .txt file in Linux
Following a suggestion from @tripleee, I’m posting another question for a coding issue I’m having, trying to be more specific about sources and expected results. My source .txt file is heavily populated with lines, and using AWK, I: extract only the rows identified by a specific code; parse the content of each line in order to get only certain values.
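The actual field layout and code are not shown in the excerpt, so the following is only a sketch of the usual awk group-by-and-sum pattern; the column numbers and the filter value "XYZ" are assumptions:

    # keep only rows whose 2nd field matches an assumed code,
    # then sum the 3rd field for each distinct value of the 1st field
    awk '$2 == "XYZ" { sum[$1] += $3 }
         END { for (k in sum) print k, sum[k] }' source.txt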
Copying multiple text lines into a file after specified pattern using shell [closed]
Closed. This question needs details or clarity and is not currently accepting answers. Closed 4 years ago. I want to insert multiple lines from file1, marked with a pattern, into file2 using shell. The pattern is 10 numbers, always different. Input example:
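The input example is cut off above, so as a sketch only: GNU sed’s r command can read file1 and insert its contents after every line of file2 that matches the pattern, here assumed (my reading of "10 numbers") to be a run of exactly 10 digits:

    # insert the contents of file1 into file2 after each line
    # consisting of exactly 10 digits (GNU sed's 'r' command reads a file)
    sed -E '/^[0-9]{10}$/r file1' file2 > file2.new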
How to skip multiple directories when doing a find
I’ve written a find function that searches for a string in each file in a given path, while skipping a list of directory names that I don’t want searched. I’ve placed this script in my .bashrc file to be called like so: The find portion works great, and it colorizes the search text so that it visually stands out, but
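The function itself is not included in the excerpt; for reference, here is a hedged sketch of the standard -prune approach for skipping several directories while grepping with color. The directory names and search term are placeholders:

    # search for "pattern" under the current path, skipping .git and node_modules
    find . \( -name .git -o -name node_modules \) -prune -o \
        -type f -exec grep --color=always -Hn "pattern" {} +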
Linux cat and less output for my text file is different from gedit and other GNOME editors
I redirected ri’s Ruby Array documentation into a file, but it didn’t look good in gedit, although the text looks just fine in the CLI. This is how my file looks in terminal editors. Everything is fine here. But when I open it in gedit or other GNOME editors, this is how it looks. Some specific words appear in an absurd format. Any suggestions
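This symptom usually means terminal formatting (backspace overstrikes or ANSI escape sequences) was captured in the redirected output; as a sketch of one way to strip it before opening the file in a GUI editor, assuming GNU sed and the col utility are available and a hypothetical file name:

    # col -b removes backspace-overstrike bold/underline (the style ri/man emit);
    # the sed expression strips any remaining ANSI colour escapes
    col -b < array_doc.txt | sed -E 's/\x1b\[[0-9;]*m//g' > array_doc_clean.txt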
How to find all files containing specific text (which includes a backslash)?
I thought I found the perfect answer in How do I find all files containing specific text on Linux?, so I tried it: nothing shows up, but I know there should have been a match. I also tried escaping the backslash, to no avail, and single quotes, also to no avail. How to find all files containing specific text which includes
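As a sketch of the usual fix: tell grep to treat the search text as a fixed string rather than a regular expression, so the backslash needs no escaping. The sample search string is a placeholder:

    # -r: recurse, -l: list matching files, -F: fixed string (no regex);
    # single quotes keep the shell from interpreting the backslashes
    grep -rlF 'C:\Users\example' .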
How to format the 2nd and 3rd lines to match the first line?
I am using Vim, and I want to format the 2nd and 3rd lines. I can do it with :2,3left <number of spaces>. Is there an easier way to get something like this? Answer: try this: press Esc, then press v, then press Enter; select the second and third lines and press =. https://www.cs.swarthmore.edu/help/vim/reformatting.html
Shuffling a large text file with or without group order maintained
Instead of making a script, is there a one-liner to shuffle a large tab-separated text file based on the unique elements in the first column? That means, for each unique element in the first column, the number of rows will be equal and specified by the user. There are two output possibilities, maintaining the row order or randomized
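The expected output is cut off above, so only as a sketch: shuf combined with awk can keep a fixed number of randomly chosen rows per unique value in the first column. The per-group count and file names are assumptions, and this variant produces the randomized-order output:

    # keep at most N randomly chosen rows for each distinct value in column 1
    N=5
    shuf data.tsv | awk -F'\t' -v n="$N" 'count[$1]++ < n' > sample.tsv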
How to edit a 300 GB text file (genomics data)?
I have a 300 GB text file that contains genomics data with over 250k records. Some records contain bad data, and our genomics program ‘Popoolution’ allows us to comment out the “bad” records with an asterisk. Our problem is that we cannot find a text editor that will load the data so that we can comment out the
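A file of that size generally calls for a streaming tool rather than an interactive editor; as a hedged sketch, sed can prepend the asterisk to matching records without loading the whole file into memory. The record IDs below are placeholders, since the real match criteria are not given:

    # prepend '*' to every line whose first field matches an assumed bad ID,
    # streaming the file so it is never held in memory at once
    sed -E '/^(bad_id_1|bad_id_2)\b/ s/^/*/' genomics.txt > genomics_commented.txt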