I want to create a wrapper script that logs its arguments, stdin, and stdout. I have written the following script wrapper.sh, which works almost fine. I expect that ./wrapper.sh arg1 arg2 gives the same result as /path/to/command arg1 arg2 with logs in /tmp/stdio-log/. But it gives a slightly different result in Example 2 below. Example 1: a command that
Tag: bash
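The post's wrapper script is not shown, so here is a minimal sketch of the idea under stated assumptions: args, stdin, and stdout each go to their own log file, `tr` stands in for the wrapped /path/to/command, and a temp dir stands in for /tmp/stdio-log.

```shell
#!/bin/bash
# Hypothetical sketch of a stdio-logging wrapper; the wrapped command and
# log layout are assumptions, not taken from the original post.
logdir=$(mktemp -d)
cmd=tr   # stand-in for /path/to/command

wrap() {
  printf '%s\n' "$@" > "$logdir/args.log"   # one argument per line
  # tee copies stdin into a log before the command sees it, and copies
  # the command's stdout into a log on the way out.
  tee "$logdir/stdin.log" | "$cmd" "$@" | tee "$logdir/stdout.log"
}

printf 'hello\n' | wrap 'a-z' 'A-Z'   # prints: HELLO
```

One subtlety a wrapper like this can hit (and which may explain the "slightly different result"): piping through `tee` changes buffering, and the pipeline's exit status is `tee`'s rather than the wrapped command's unless `set -o pipefail` is in effect.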
How to include comments from Nmap script in the output file
There might be a better way to do this, but what I have is an input file called “list.txt” of IP addresses for my Nmap scan like so: Then I scan with Nmap using and output to a file using: Additionally I have used sed to make the “output.txt” look like this: I would like to include the comments from
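The post's list.txt and output.txt samples are elided, so this sketch invents a plausible format (an "IP # comment" line per host in list.txt, one IP per line in the scan output) and uses awk to carry each comment over; none of the file contents below come from the original post.

```shell
#!/bin/bash
# Hypothetical data standing in for the post's elided examples.
tmp=$(mktemp -d)
cat > "$tmp/list.txt" <<'EOF'
10.0.0.1 # web server
10.0.0.2 # database
EOF
cat > "$tmp/output.txt" <<'EOF'
10.0.0.1
10.0.0.2
EOF

# First pass (NR==FNR) memorizes each annotated line by IP; second pass
# replaces bare IPs in the scan output with their annotated versions.
awk 'NR == FNR { c[$1] = $0; next }
     $1 in c  { print c[$1]; next }
     { print }' "$tmp/list.txt" "$tmp/output.txt"
```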
Stop Java from killing Bash script that started it
I’ve spent the past couple of days working on this, and at this point I am super stuck. I have a Java program that must be run, but not as a service. This program must also be capable of updating itself when a new file is given for updating. As a result, I have a script that is started with Linux that
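A common way to keep two processes from taking each other down is to decouple them: launch the long-running program detached from the script's job table. This is a sketch of the nohup-plus-disown approach, with `sleep` standing in for the Java invocation (the actual java command line is not in the excerpt).

```shell
#!/bin/bash
# Start the long-running process (sleep stands in for the Java program)
# immune to hangups and detached from this shell's job control.
log=$(mktemp)
nohup sleep 30 > "$log" 2>&1 &
pid=$!
disown "$pid"   # remove it from the job table so the shell won't signal it
echo "started pid $pid"
```

`setsid` is an alternative when you also want the child in its own session, so signals sent to the script's process group never reach it.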
Use values in a column to separate strings in another column in bash
I am trying to separate a column of strings using the values from another column, maybe an example will be easier for you to understand. The input is a table, with strings in column 2 separated with a comma ,. The third column is the field number that should be outputted, with , as the delimiter in the second column.
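The post's actual table is not shown, so this sketch invents two sample rows; the idea is to let awk `split` column 2 on commas and use column 3 as the index into the resulting fields.

```shell
#!/bin/bash
# Sample rows (invented): column 2 is comma-separated, column 3 says which
# of those comma-fields to print.
printf '%s\n' 'id1 a,b,c 2' 'id2 x,y 1' |
awk '{ split($2, f, ","); print $1, f[$3] }'
# prints:
# id1 b
# id2 x
```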
How can I execute one command from cmd while executing a script in WSL?
I have a build script I’m using in a project I’m currently working on. I found that a certain command only works from cmd and not WSL, but I want to continue to work in WSL. I have something like this: Say command2 only works in cmd. How can I make this script switch to cmd, execute a command and
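Under WSL, Windows executables on the PATH can be invoked directly from a bash script, so the usual approach is `cmd.exe /c <command>` for a single hop into cmd. A guarded sketch (command2 is the post's placeholder and stays hypothetical; the guard makes this a no-op on plain Linux):

```shell
#!/bin/bash
# Guarded sketch: only call out to cmd when cmd.exe is actually reachable,
# i.e. when running under WSL. "echo ..." stands in for the post's command2.
if command -v cmd.exe >/dev/null 2>&1; then
  cmd.exe /c "echo running command2 in cmd"
else
  echo "cmd.exe not found (not under WSL); skipping"
fi
```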
the best way to get “find” style output from “ls -fR”
My goal is to find the fastest way to list all available files in a directory (call it the master directory). The master directory contains about 5 million files, organized using subdirectories, but it’s unclear how the subdirectories are arranged. After some research I realized that the fastest way to do so is using ls -fR (-f disables sorting). The default
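One way to turn `ls -fR` output into find-style paths is to track the `directory:` header lines that `-R` emits and prefix each entry with the current directory. A sketch against a small temp tree (note this breaks on file names containing newlines or ending in a colon):

```shell
#!/bin/bash
# Build a tiny sample tree so the demo is self-contained.
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b"
touch "$tmp/a/f1" "$tmp/a/b/f2"

# -f disables sorting (and implies -a); awk remembers each "path:" header
# and glues it onto the entries that follow, skipping blanks, "." and "..".
ls -fR "$tmp" | awk '
  /:$/ { dir = substr($0, 1, length($0) - 1); next }  # "directory:" header
  $0 == "" || $0 == "." || $0 == ".." { next }
  { print dir "/" $0 }'
```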
Run a command on a list of files
The command shred in Ubuntu does not shred files recursively. Hence, I wanted to list all the files in a directory by doing find -L and then shred these files using shred. However, find -L | shred does not work. Can someone please help me do so? Thanks in advance. Answer find | shred works as if you ran just
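The pipe fails because shred takes file names as arguments, not on stdin, so find's output needs to be turned into arguments, e.g. with xargs. A sketch on throwaway temp files (`-print0`/`-0` keeps names with spaces safe):

```shell
#!/bin/bash
# Create some scratch files just for the demo.
tmp=$(mktemp -d)
touch "$tmp/one.txt" "$tmp/two.txt"

# shred reads names from argv, so feed find's results via xargs;
# -u also unlinks the files after overwriting them.
find -L "$tmp" -type f -print0 | xargs -0 shred -u
find "$tmp" -type f | wc -l   # prints: 0
```

`find -L "$tmp" -type f -exec shred -u {} +` is an equivalent one-process-tree alternative.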
copy all copyQ tab names to a tab named tabs
I want to copy all existing tab names inside copyQ as items in a tab named tabs. My prototype produces duplicates: ./tabs.sh ; copyq tab > tabs.sh ; sed -i 's/.*/copyq tab tabs add "&"/' tabs.sh ; ./tabs.sh I read: https://copyq.readthedocs.io/en/latest/scripting.html#working-with-tabs Answer the copyq removetab tabName removes the tab (meaning also all items of the tab). If the tab
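Putting the answer's hint together with the prototype's commands, a sketch might look like this: remove the collector tab first (that is what prevents the duplicates), then re-add every tab name. Only the copyq subcommands that appear in the post are used; the guard makes the script a no-op where copyq isn't running.

```shell
#!/bin/bash
# Guarded sketch built from the post's commands: copyq tab, copyq removetab,
# and copyq tab tabs add. Requires a running copyQ instance to do anything.
if command -v copyq >/dev/null 2>&1; then
  copyq removetab tabs 2>/dev/null || true   # drop stale copies, if the tab exists
  copyq tab | while IFS= read -r name; do
    [ "$name" = tabs ] && continue           # don't add the collector tab itself
    copyq tab tabs add "$name"
  done
else
  echo "copyq not available; skipping"
fi
```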
How to create file with folders for all users in home directory
So for example we have 4 users in /home directory: What I am trying to achieve is that I create directories with files inside for all these users. For one user I can try something like: mkdir -p /home/user/dir/anotherdir && touch /home/user/dir/anotherdir/somefile. But I want a dynamic solution for when I don’t know how many users there are, nor their names.
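A glob over the home base directory makes this dynamic: the loop below discovers whatever user directories exist, without knowing their count or names. A temp dir stands in for /home so the sketch is safe to run.

```shell
#!/bin/bash
# Temp dir standing in for /home, with two invented users.
home=$(mktemp -d)
mkdir "$home/alice" "$home/bob"

# The trailing slash in the glob matches directories only.
for user in "$home"/*/; do
  mkdir -p "$user/dir/anotherdir"
  touch "$user/dir/anotherdir/somefile"
done

ls "$home/alice/dir/anotherdir"   # prints: somefile
```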
Why can’t I get mv to rename and copy a file?
The last thing I want to do in this script is take the user input to rename the corresponding files to include a .bak extension rather than .txt, which will copy into a backup directory. I keep receiving an error message saying e.g. The snippet in question (right at the bottom of the full code): Full code: Answer The correct one would be
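The post's snippet and error message are elided, so this is only a sketch of the usual pattern: strip the old extension, then mv into the backup directory with the new one. All paths below are invented for the demo.

```shell
#!/bin/bash
# Invented layout: one .txt file and a backup directory in a temp dir.
tmp=$(mktemp -d)
mkdir "$tmp/backup"
touch "$tmp/notes.txt"

file="$tmp/notes.txt"
base=$(basename "$file" .txt)            # strip directory and .txt suffix
mv "$file" "$tmp/backup/$base.bak"       # rename and relocate in one step

ls "$tmp/backup"   # prints: notes.bak
```

Note that mv moves the file; if the original must stay in place, cp is the right tool for the backup copy.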