Almost anytime I use a pipe now I think of the last line from your ode to pipe and chuckle:
"I hope this post helped you learn something, if not, just pipe it to/dev/null."
There’s a handful of indispensable command line tools, and jq is the only one I can think of that appeared during my career, which tells you how young I still am.
also try github.com/wting/autojump; it works with multiple directories, so I can type "j d8 d" to jump to /mnt/c/www/d8smartsheetweb/docroot. I can't just type "j docr" or something because I have many docroots :)
awk and jq as well. I think the first is a staple (picture related) and the latter a result of so much YAML everywhere, especially on that thing we all know.
I had to fix a bash script an ex VP-of-platform at my last company wrote that was literally “sed to remove line N to line M from this systemd unit file” and I’ve never wanted to strangle someone I couldn’t physically reach more in my life.
I use them in different ways. I mostly use comm with sort when processing lists of things to feed a tool like xargs (clever use of uniq often works here as well). I use diff for visual inspection or to feed patch.
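A rough sketch of that comm + sort pattern feeding xargs (the file names and the install-thing command are just placeholders):
# lines in wanted.txt that are missing from installed.txt, one per xargs invocation
comm -23 <(sort wanted.txt) <(sort installed.txt) | xargs -r -n1 install-thing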
I raise you ripgrep (rg). It has a lot of similar convenience behaviour like respecting .gitignore. It’s written in Rust and incredibly fast. I use it to search a 6 GB code base with regular expressions and it flies.
I use pt even though I know ripgrep is superior.
I'm just a Go fanboy.
I still haven't finished the article about ripgrep though; it was very interesting.
Why does that page say GNU grep doesn't support "ignore case", "print only filenames that contain matches", or "print lines of context before/after matches"?
It looks like a bunch of ack's features can be boiled down to having a built-in "find".
Alias and vim
On Friday I tried to fix a bad Ubuntu install that was missing EVERYTHING. All my favorite keystrokes were missing and I couldn’t use alias to have it remember them - so here I was typing in all my favs like a monster.
Then I had to use nano :(
less and grep (or the fancy modern replacements like ag), but limiting it to two is a bit unfair, I think, because ls, cp, mv, etc. are so fundamental that I couldn't do anything without them either. Or good old make, or even dd, of course. :)
Can't live without zsh, git, grep, awk, sed, xargs, httpie, head, tail, vim, visidata, less, and probably many more. Also, can't pick any two from them.
I wish all these CI tools took a page from make and make-like tools and made dependencies a first-class part of their language.
I've been thinking about this a lot with GitHub Actions, which are limited in some fairly sad ways.
Depends on the day.
Finding weak systems - Nmap and msfconsole
Fixing permissions (Folders and file shares) - icacls and for
Daily uses - get-aduser and set-aduser
It's like choosing your favorite child.... But I find I get really frustrated when a system has more and not less. To clarify, I mean less. I need less. Like the command less. Darn geeks and their names.
I'll list more than 2 pairs, but in different groups:
openssl: check web certs, random passwords (see the sketch below) / wget
sshfs/openvpn: all your base belong to u
git/ls because symlinking dotfiles dies
my tool for storing shell history in a database:}
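The openssl sketch mentioned above, for checking web certs and generating random passwords (the host name is just a placeholder):
# inspect a server's certificate dates
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -dates
# generate a random password
openssl rand -base64 24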
Mine: history and grep
Because I'm too lazy to retype long previous commands with all their params.
usage: $ history | grep <some-word-from-the-command-you-want-to-re-run>
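For example (the history entry number is hypothetical):
$ history | grep rsync    # find the long command you ran earlier
$ !1042                   # bash history expansion re-runs entry 1042 verbatim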
Yeah I was trying to decide if Perl was cheating. I guess Perl and “grep -P” for me but only because I run emacs in X so it’s not a command line tool ;)
My bad. Emacs is an operating system and Perl is a religion.
(Or did I get that backwards?)
...that said Perl is incredibly helpful as a command line Swiss army chainsaw.
I wish I knew awk better; it seems incredibly useful. Any tips, or script snippets you find most useful? My two tools would be grep (yeah, I know) and diff.
Here's one way of adding custom command line options to Awk scripts...
#!/usr/bin/awk -f
BEGIN {
    # scan the arguments awk would otherwise treat as input files
    for (i = 1; i < ARGC; i++) {
        if (ARGV[i] ~ "^--flag=") {
            # "--flag=" is 7 characters, so the value starts at position 8
            _flag = substr(ARGV[i], 8)
            # remove the option so awk doesn't try to open it as a file
            delete ARGV[i]
        }
    }
}
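A quick usage sketch, assuming the script is saved as flag.awk and made executable (the flag value and data file are placeholders):
./flag.awk --flag=verbose data.txt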
It's also possible to call other executables from within Awk scripts...
#!/usr/bin/awk -f
BEGIN {
    # build the command; the shell expands "~" when the pipe is opened
    _path = "~/"
    cmd = "ls " _path
    # read the command's output one line at a time
    while ((cmd | getline _line) > 0) {
        print _line
    }
    # close the pipe so it can be reopened or reused later
    close(cmd)
}
... which may or may not help with code reuse.
My response before reading the replies to this is
ag: github.com/ggreer/the_sil…
tig: github.com/jonas/tig
but I'm pretty sure I might have found some new necessities in this thread. Thanks!
It’s so funny… awk is on my list too… but I almost exclusively use it as a wildly overblown tool for printing space-separated values in a pipeline, since I can never recall how to do that well with anything else.
Cut is the exact one that messes me up... awk defaults to splitting on almost any run of whitespace (tabs, etc.)... cut doesn't, and requires shenanigans to make it work.
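A small illustration of the difference (ps aux is just a convenient multi-space example):
# awk splits on any run of whitespace, so this reliably prints the PID column
ps aux | awk '{print $2}'
# cut wants a single exact delimiter, so the aligned columns break it
ps aux | cut -d' ' -f2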
Definitely xargs and jq. Honorable mention to curl and sometimes perl (it’s legit as a command line tool). I’m sad to say I’ve never taken the time to learn awk
If you want something similar that comes with libraries and a formal specification for the query language, check out jmespath.org. Didn't know it existed before I joined @awscloud ...
I thought about this a lot. First thing that came to mind was "ls". Then I thought it was git. Turns out my favorite is "alias" :D
gist.github.com/abhiramr/6d4a3…
I'm partial to lsd in this ^
using: history|cut -c8-|cut -f1 -d' '|sort|uniq -c|sort -n
git and ls
2nd and 3rd were dot and launchctl, but that's because it takes 5x as many tries to get them to work for me
make and grep
I wish I could jq everything though! Need to find more tools for beautifying stdout without having to open up vim every time (love it, but not for just reading files).
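For what it's worth, jq on its own covers a lot of the pretty-printing (the URL and file name are placeholders):
# pretty-print JSON straight off a pipe
curl -s https://api.example.com/items | jq .
# keep colour when paging long output
jq -C . big.json | less -R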
grep and sed. Embarrassingly, I only use awk for splitting strings and printing the output.
In 0th place is obviously sh, because who could live without scraping the output of a subprocess!
Both of those seem like they point to a fundamental shortcoming in the shell. In bash I feel like I spend way too much time trying to parse and capture output. That is why I have been considering switching to @PowerShell_Team
The only way I could accept a vi/Emacs flame war ending with a single comment is if Emacs stood for “Epstein murder a certainty, sweetheart”.
In bash, C-r to search through history, and "readlink -f" because why risk passing in a relative path as an arg when you can use a fully qualified one?
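A small sketch of that readlink -f habit (the file and destination are placeholders):
# resolve to an absolute, symlink-free path before handing it to another tool
cp "$(readlink -f ./settings.yml)" /etc/myapp/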
I taught myself awk to the point of writing things that looked more like Perl with an odd lack of sigils in it... then I've basically only used bash+sed in its place since.
I think I have to go with jq and bup, because more structured data is Good.
Honestly though, I'm not sure this is a category that can be honestly narrowed down to two entries. find (or fd or fselect), grep (or ack or rg), sed, xargs, so many things in moreutils and coreutils…
The command line is an ecosystem where tools standing alone don't thrive.
Judging by the commit history, z is a re-implementation of another one he made called j, which is a re-implementation of autojump, and he's also made another re-implementation called j2.
Busy chap!
avoiding the “obvious” — although I’m more likely to chain cut, sort, uniq, sed, grep, or write perl, than I am to write awk…
I’m gonna submit “pv” and “ack”/“ag”/“ack-grep”.
Anyone know an improved diff utility for directory trees that can easily ignore file patterns? Bonus if it integrates with vim -d. Lately I am merging a set of terraform modules/Ansible roles between projects which have been improved in each.
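One possibility, assuming GNU diffutils is available (the directories and exclude patterns are just examples):
# recursive diff, skipping state and cache files
diff -r -u -x '*.tfstate' -x '.terraform' modules-old/ modules-new/
# then open an interesting pair side by side
vimdiff modules-old/vpc/main.tf modules-new/vpc/main.tf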
Speaking of which… I'm trying to use the sed command in an Azure DevOps pipeline, with no success :( (with PowerShell I got no error, but the file didn't change either). Any tips please? ☺️
I'm a sysadmin so I'm tempted to say something like strace and tcpdump, but if you snooped on my shell history over the past half a decade the truth would probably be ansible-playbook and vim 👀
seq & find. The former I like a lot in for loops.
This also made me interested in what I actually use the most, and my zsh history is 10K lines so probably pretty representative:
Generated by linux.byexamples.com/archives/332/w….
I suspect `ls` would be at the top of the list, if it weren't for zsh automatically doing ls after cd.
at the moment it is `git` and `make`. While I like git, I surely do not enjoy make that much, but it is essential for simplifying the build process around our apps (Go-based). Personal favourites would still be mc, tmux, zsh, ssh, rsync, etc.
['grep', 'awk']. The most common pattern I use on the CLI is "for x in $(command | egrep -i 'something' | awk -F',' '{ print $2 }' | <something that returns single-line strings>); do action ${x}; sleep 1; done", plus putting hosts in a file with an ansible alias to run sudo '${x}' on all of them.
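A concrete (and entirely hypothetical) version of that loop, with hosts.csv and the grep pattern as placeholders:
for x in $(egrep -i 'web' hosts.csv | awk -F',' '{ print $2 }'); do
    ssh "$x" uptime
    sleep 1
done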
Professionally gw & docker. Others git / ctrl+r+r (history). I'm confused though...are they tools, cli, just commands...because I learned the terminology from dos in 1997, when the 5.25" floppy disk had to be inserted to boot up. They used to call it command.