
jq array of hashes to csv

So I have a data source like this:

{
  "things": [
    {
      "thing_name": "Bestco",
      "thing_id": 1
    },
    {
      "thing_name": "GreatCo",
      "thing_id": 2
    },
    {
      "thing_name": "DressCo",
      "thing_id": 3
    }
  ]
}

I want to get output like this:

$ echo '{"things":[{"thing_name":"Bestco","thing_id":1},{"thing_name":"GreatCo","thing_id":2},{"thing_name":"DressCo","thing_id":3}]}' |
  jq -r '.things | map(.thing_name, .thing_id, "n") | @csv' |
  sed -e 's/,"$//g' -e 's/^",//g' -e 's/^"$//g'
"Bestco",1
"GreatCo",2
"DressCo",3

$ 

Inserting a fake newline value feels like a hack, and the output then has to be cleaned up with sed to work. How do I do this with just jq?


Answer

Instead of trying to put literal newlines in your data, split the data into separate arrays (one per line of desired output), and pass each one to @csv.

s='{"things":[{"thing_name":"Bestco","thing_id":1},{"thing_name":"GreatCo","thing_id":2},{"thing_name":"DressCo","thing_id":3}]}'

jq -r '.things[] | [.thing_name, .thing_id] | @csv' <<<"$s"

…properly emits:

"Bestco",1
"GreatCo",2
"DressCo",3