I think awk is mostly only used by the 40+ set. It really is great for line-at-a-time processing. But so often I have to do things that at least slightly cross a line. Imagine an awk that was designed for json, say.
On a case-by-case basis the data can be made to work better with awk, often boiling down to a few things (including writing Emacs keyboard macros or functions). One is that jq -c ... will put a sequence of json objects each on its own line.
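For example (a minimal sketch, with a made-up data.json holding an array of objects and a hypothetical "status" field):

    # flatten the array into one compact object per line,
    # so awk's line-at-a-time model applies again
    jq -c '.[]' data.json | awk '/"status":"failed"/ { n++ } END { print n+0 }'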
But yeah, I don't use awk a lot anymore for json. Before json tools were very good, I would do a simplistic conversion to xml, use xpath, and work from there. These days I often load json into postgresql and use that excellent toolset. Even that's probably only for the over-40s!
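A rough sketch of the postgresql route, with made-up table and file names (and assuming the newline-delimited json contains no tabs or backslashes, which would trip up \copy's default text format):

    -- one jsonb column; jq -c output loads nicely, one object per line
    CREATE TABLE raw_events (doc jsonb);
    \copy raw_events FROM 'events.ndjson'

    -- then the whole SQL toolset is available
    SELECT doc->>'status' AS status, count(*)
    FROM raw_events
    GROUP BY 1
    ORDER BY 2 DESC;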
It really frustrates me that I don't know any command pipeline for, say, json remotely as well as I know the Unix shell. I mean, it's fine, I fire up Racket and code away, but there's got to be a better way. I should probably learn jq and friends and quit my bellyaching.
I'm careful to say "I don't know" as opposed to the proper Internet way of speaking, which is to say "there doesn't exist". (-:
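For what it's worth, jq does compose with the rest of the pipeline pretty naturally. A small sketch, with a made-up commits.json holding an array of {author, date} objects:

    # pure jq: filter and project
    jq -r '.[] | select(.author == "alice") | .date' commits.json

    # or hand the rest off to the usual Unix suspects
    jq -r '.[].author' commits.json | sort | uniq -c | sort -rn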
I still use sed and awk on the command line, but with JSON I just end up live-coding everything with an editor-connected Lisp REPL 🤷🏻‍♂️