dq: like jq but the DSL is JS
jq is great, but I can never remember how to use it. What if you could just write JS? I wrote a tiny tool dq that lets you do that. Here’s an example from the jq tutorial showing how to extract some details from a big JSON response:
curl 'https://api.github.com/repos/jqlang/jq/commits?per_page=5' |
jq '.[] | {message: .commit.message, name: .commit.committer.name}'
The .[] thing is something like map. The variable-less .commit property access notation always throws me off: what object are we talking about? Here’s how it looks with dq:
dq 'data.map(d => ({ message: d.commit.message, name: d.commit.committer.name }))'
data is a pre-defined variable containing the parsed JSON. My version is a few characters longer, but it’s just JS! Don’t settle for something that’s merely like map; just use map. If you know JS you can write this in your sleep.
Examples
Here are some examples from the help output:
$ echo '{ "a": 1 }' | dq data.a
1
$ cat package.json | dq 'Object.keys(data).slice(0, 5)'
[ "name", "type", "version", "scripts", "dependencies" ]
The -l/--lines flag lets you process non-JSON text by splitting it on newlines, so data is an array of strings. Here I use it to count filenames that start with p:
$ ls | dq -l "data.filter(f => f.startsWith('p')).length"
3
I’ve also included Remeda, a Lodash-like utility library I like a lot. It’s in scope as R; you can just use it. Here we get counts for all the starting letters:
$ ls | dq -l "R.countBy(data, s => s[0])"
{ a: 1, d: 2, n: 1, p: 3, R: 1, s: 1, t: 2 }
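If you’d rather see what R.countBy is doing, a plain-TypeScript approximation takes only a few lines. This is a sketch for illustration, not Remeda’s actual implementation:

```typescript
// Rough plain-TS equivalent of Remeda's countBy, for illustration only
function countBy<T>(items: T[], key: (item: T) => string): Record<string, number> {
  const counts: Record<string, number> = {}
  for (const item of items) {
    const k = key(item)
    // increment the tally for this key, starting from 0
    counts[k] = (counts[k] ?? 0) + 1
  }
  return counts
}

countBy(['apple', 'avocado', 'pear'], s => s[0])
// → { a: 2, p: 1 }
```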
Finally, here is a somewhat elaborate example from actual practice. I turned this into a shell alias I use all the time. I wrote it as a one-liner, but I split it into multiple lines here for readability.
$ npm outdated --long --json | dq '
> R.pipe(
> data,
> R.entries(),
> R.map(([k, v]) => [
> k,
> v.type.startsWith("dev") ? "dev" : "",
> v.current,
> v.wanted,
> v.latest
> ]),
> table
> )'
@tailwindcss/vite 4.0.11 4.0.12 4.0.12
@types/node dev 22.13.9 22.13.10 22.13.10
vite 6.2.0 6.2.1 6.2.1
table is another helper in scope that formats a string[][] into nice columns.
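I don’t know exactly how table is implemented, but the basic idea — measure each column, then pad every cell to that width — can be sketched like this (a guess at the behavior, not dq’s actual code):

```typescript
// Hypothetical sketch of a table-like column formatter
function table(rows: string[][]): string {
  // find the widest cell in each column
  const widths: number[] = []
  for (const row of rows) {
    row.forEach((cell, i) => {
      widths[i] = Math.max(widths[i] ?? 0, cell.length)
    })
  }
  // pad every cell to its column's width and join with spaces
  return rows
    .map(row => row.map((cell, i) => cell.padEnd(widths[i])).join(' ').trimEnd())
    .join('\n')
}

table([['a', 'bb'], ['ccc', 'd']])
// → 'a   bb\nccc d'
```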
How it works
This is the core logic with some noise stripped out. It reads the data from stdin, sticks it in a variable called data, evals the code you passed in, and prints the result.
import { readAll } from 'jsr:@std/io/read-all'

// get code from positional args
const code = args._.join(' ').trim() || 'data'
// read data from stdin
const input = new TextDecoder().decode(await readAll(Deno.stdin))
// parse JSON (or split lines into array if --lines)
const data = args.lines ? input.trim().split('\n') : JSON.parse(input)
// run the passed-in code. `data` is accessible from it
const result = eval(code)
console.log(result)
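The part that makes this work is that a direct eval call runs in the enclosing scope, so the locally declared data is visible to whatever string you pass in. A minimal demonstration:

```typescript
// A direct eval call sees the variables in the surrounding scope
const data = { a: 1, b: 2 }
const result = eval('Object.keys(data)')
// result is ['a', 'b']
```

An indirect call like (0, eval)(code) would instead run in the global scope and not see data, which is why the direct form matters here.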
To make the script a global utility, my dotfiles install script creates a symlink to dq in ~/.local/bin (which is on my PATH):
ln -sf "$PWD/bin/dq.ts" ~/.local/bin/dq
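For the symlinked file to run directly as dq, the TypeScript file itself has to be executable and begin with a Deno shebang. The exact flags are my assumption; I haven’t quoted the script’s first line:

```shell
# Hypothetical first line of bin/dq.ts (an assumption, not copied from the repo):
#   #!/usr/bin/env -S deno run
# The file also needs its executable bit set:
chmod +x bin/dq.ts
```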
Is eval dangerous?
I mean, you’re running commands on your own computer. There are a million ways you could burn the whole thing down. But check this out:
$ cat package.json | dq -l "Deno.readTextFileSync('a.txt')"
error: Uncaught (in promise) NotCapable: Requires read access to "a.txt",
run again with the --allow-read flag
const result = eval(code)
^
Because the script runs with Deno’s restrictive default permissions, it blows up if I try to do anything exciting. You could easily write this script with Node or Bun, but I don’t think the default permissions situation is as good in either of them.
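For completeness, Deno’s --allow-read flag is the escape hatch when you do want file access. This invocation is hypothetical — normally dq runs via the symlink with no extra permissions:

```shell
# Opt in to file reads for a single run (hypothetical direct invocation)
cat package.json | deno run --allow-read ~/.local/bin/dq "Deno.readTextFileSync('a.txt')"
```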
Performance: surprisingly good
Performance is not a concern for the small JSON files I work with, and this is clearly not a high-performance tool if we have to read all of stdin into memory at once. That said, it’s not bad! I used dq to make some JSON containing 1000 copies of a 500k package-lock.json, resulting in a 435M file. Counting all the packages in all the copies takes about 3 seconds on my M1 MBP.
More surprising (and I hope someone will tell me if my jq version is terrible) is that dq is faster than jq here and uses less memory. There are definitely ways to make jq faster, but I think it’s reasonable to compare naive approaches.
$ l package-lock.json
.rw-r--r--@ 560k david 5 Mar 17:15 -- package-lock.json
$ cat package-lock.json | time dq 'JSON.stringify(Array.from({ length: 1000 }).map(() => data))' > big.json
0.70s user 0.38s system 83% cpu 1.299 total
$ l big.json
.rw-r--r--@ 435M david 9 Mar 13:21 -- big.json
$ cat big.json | /usr/bin/time -l jq 'map(.packages | keys | length) | add'
1108000
5.29 real 3.99 user 0.92 sys
1734868992 maximum resident set size
...
2350212288 peak memory footprint
$ cat big.json | /usr/bin/time -l dq 'data.map(d => Object.keys(d.packages).length).reduce((a,b)=>a+b)'
1108000
2.60 real 3.15 user 0.76 sys
1430994944 maximum resident set size
...
1975573632 peak memory footprint