30 Linux Commands Every Developer Actually Uses
Practical Linux commands organized by real-world scenarios — files, text, processes, networking, and disk management.

There's no shortage of "Linux commands cheat sheets" online, yet you still end up googling the same things over and over. Most of those lists go A-to-Z like a textbook. That's reference material, not a practical guide.
This is organized by situation instead. "I need to do X — what's the command?" Quick lookup, no filler.
Files and Directories
1. ls — list contents
ls -la # include hidden files, detailed info
ls -lh # human-readable file sizes (KB, MB)
ls -lt # sort by modification time
-l is basically always attached. -a is essential when you need to check .env or other dotfiles.
2. cd — change directory
cd ~ # home directory
cd - # jump back to previous directory
cd ../.. # two levels up
cd - is underrated. When you're bouncing between two directories, it saves a lot of typing.
3. cp, mv, rm — copy, move, delete
cp -r src/ backup/ # copy directory recursively
mv old-name.txt new.txt # rename (same command as move)
rm -rf node_modules/ # force delete directory (careful!)
rm -rf deletes without confirmation. Always double-check the path. Especially dangerous with variables — rm -rf $DIR/ with an empty $DIR is a horror story waiting to happen.
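One way to defuse the empty-variable case is the shell's ${VAR:?} expansion, which aborts the command instead of expanding to nothing. A minimal sketch (BUILD_DIR is a hypothetical variable name):

```shell
BUILD_DIR="./build"        # hypothetical variable
mkdir -p "$BUILD_DIR"

# ${BUILD_DIR:?} makes the shell refuse to run the command if the
# variable is unset or empty, so rm's target can never collapse to "/":
rm -rf "${BUILD_DIR:?}/"

# With BUILD_DIR empty, rm never executes:
# BUILD_DIR=""
# rm -rf "${BUILD_DIR:?}/"   # error: BUILD_DIR: parameter null or not set
```

The :? expansion is POSIX, so it works in plain sh as well as bash and zsh.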
4. mkdir — create directories
mkdir -p src/components/ui # create nested directories in one go
Without -p, you get an error if intermediate directories don't exist.
5. find — locate files
find . -name "*.log" # find .log files
find . -name "*.tmp" -delete # find and delete
find . -type f -mtime -1 # files modified in the last day
find . -size +100M # files larger than 100MB
find with combined flags is powerful. Quote the -name pattern so the shell doesn't expand the wildcard prematurely.
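Those tests chain together in a single command. A runnable sketch of a cleanup one-liner, using a throwaway directory (in real use you'd point it at a log directory and combine thresholds like -size +10M -mtime +7):

```shell
mkdir -p /tmp/old-logs                    # stand-in directory
printf 'x' > /tmp/old-logs/app.log

# Regular files, matching name, non-empty: all tests must pass
# before -delete fires.
find /tmp/old-logs -type f -name "*.log" -size +0c -delete
```

Run the same command without -delete first to preview what would be removed.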
Working with File Contents
6. cat — print file contents
cat config.yml # print entire file
cat file1.txt file2.txt # concatenate and print multiple files
Good for short files. For anything long, use less.
7. less — paginated viewing
less /var/log/syslog
Navigate with j/k, search with /pattern, quit with q. If you know vim keybindings, you already know less.
8. head, tail — view start/end of files
head -n 20 file.txt # first 20 lines
tail -n 50 file.txt # last 50 lines
tail -f /var/log/app.log # follow log output in real time
tail -f is the standard way to monitor live server logs. Ctrl+C to stop.
9. grep — search text
grep "error" app.log # search in a file
grep -r "TODO" src/ # recursive directory search
grep -i "warning" log.txt # case-insensitive
grep -n "function" script.js # show line numbers
grep -c "404" access.log # count matches
One of the most-used commands in day-to-day development. Combining -r and -n is a common pattern. For faster searches, check out ripgrep (rg).
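The context flags are worth adding to that toolkit when you're chasing a stack trace: -C n prints n lines around each match. A small self-contained demo (the file contents are made up):

```shell
mkdir -p src                                   # stand-in tree
printf 'a\nb\nTypeError here\nd\n' > src/f.js

# Each match plus 2 lines of surrounding context, with file names
# and line numbers; context lines use '-' instead of ':' as separator.
grep -rn -C 2 "TypeError" src/
```

-A and -B give you only the lines after or before a match, respectively.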
10. wc — count lines/words/bytes
wc -l file.txt # line count
find src/ -name "*.ts" | wc -l # count TypeScript files
11. sort, uniq — sort and deduplicate
sort access.log | uniq -c | sort -rn | head -20
This pipeline finds the most frequent patterns in a log. Top requested URLs, most common IPs — that kind of thing.
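For the "most common IPs" case specifically, extract the field first, then count. This assumes the client IP is the first whitespace-separated field, as in the common Apache/Nginx formats (sample lines below are made up):

```shell
# Sample log; normally you'd use your real access.log.
printf '1.2.3.4 GET /a\n5.6.7.8 GET /b\n1.2.3.4 GET /c\n' > access.log

# Top 5 client IPs by request count; the most frequent one
# (1.2.3.4, count 2) comes out on top.
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -5
```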
Permissions and Ownership
12. chmod — change permissions
chmod 755 deploy.sh # rwxr-xr-x (script can be executed)
chmod 600 .env # rw------- (owner read/write only)
chmod +x script.sh # add execute permission
The numbers: 4=read, 2=write, 1=execute. Three digits map to owner/group/others. 7=4+2+1 (all), 5=4+1 (read+execute), 0 (none).
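A worked example of reading the digits (report.txt is just a stand-in file):

```shell
touch report.txt       # stand-in file
chmod 640 report.txt   # 6 = 4+2 (owner rw), 4 = group read, 0 = others none
ls -l report.txt       # permission string shows -rw-r-----
```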
13. chown — change ownership
chown -R www-data:www-data /var/www/ # change owner recursively
Mismatched file ownership is a common cause of 403 errors in web server deployments.
Process Management
14. ps — list processes
ps aux # all running processes
ps aux | grep node # filter for node processes
The grep process itself shows up in that output; pgrep node sidesteps the self-match.
15. kill — terminate processes
kill 12345 # terminate by PID (SIGTERM)
kill -9 12345 # force kill (SIGKILL)
Port conflict preventing your server from starting? Find and kill the offending process:
lsof -i :3000 # what's using port 3000?
kill $(lsof -t -i :3000) # kill it
16. top / htop — system monitoring
top # built-in monitor
htop # nicer version (needs installing)
Real-time CPU and memory usage. First tool to reach for when a server slows down.
17. nohup / & — background execution
nohup node server.js > output.log 2>&1 &
Keeps a process running after you disconnect from SSH. For production, use PM2 or systemd. But nohup works in a pinch.
Networking
18. curl — HTTP requests
curl https://api.example.com/data # GET request
curl -X POST -d '{"key":"value"}' -H "Content-Type: application/json" URL
curl -o file.zip https://example.com/file # download a file
curl -I https://example.com # headers only
Faster than opening Postman for quick API tests. Add -v to see full request/response headers for debugging.
19. wget — file downloads
wget https://example.com/file.tar.gz
wget -r -l 1 https://example.com/docs/ # recursive download
20. ss (or netstat) — network connections
ss -tlnp # listening TCP ports and their processes
"What's running on this port?" — that's what ss answers. It's the modern replacement for netstat, and it's faster.
21. ping — connectivity check
ping -c 4 google.com # send 4 pings
The most basic "is the network working?" test between two machines.
Disk and System
22. df — disk usage
df -h # partition usage, human-readable
A full disk kills applications. Check this regularly.
23. du — directory sizes
du -sh * # size of each item in current directory
du -sh node_modules/ # size of a specific directory
du -h --max-depth=1 | sort -rh # largest directories first
When disk is running low, this is how you find what's eating the space.
24. tar — compress/extract
tar -czf backup.tar.gz src/ # compress
tar -xzf backup.tar.gz # extract
tar -tzf backup.tar.gz # list contents without extracting
Flags: -c create, -x extract, -z gzip, -f file. "czf to pack, xzf to unpack."
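One more flag worth knowing is -C, which extracts into a directory other than the current one. A self-contained sketch (paths are illustrative):

```shell
mkdir -p src && printf 'hi' > src/app.txt   # stand-in input
tar -czf backup.tar.gz src/                 # pack

mkdir -p /tmp/restore
tar -xzf backup.tar.gz -C /tmp/restore      # unpack somewhere else
```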
Text Processing
25. sed — stream editor
sed -i 's/old/new/g' file.txt # find-and-replace in file
sed -n '10,20p' file.txt # print lines 10–20
26. awk — field-based text processing
awk '{print $1}' access.log # first field (usually IP)
awk -F: '{print $1}' /etc/passwd # custom delimiter
docker ps | awk '{print $1, $2}' # container ID and image
sed and awk go deep, but the patterns above cover most real-world needs.
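Since -i rewrites the file with no undo, it's worth knowing that -i can take a backup suffix (attach it directly, with no space, which works in both GNU and BSD sed):

```shell
printf 'old value\n' > config.txt     # stand-in file

# Same replacement as before, but the original survives as config.txt.bak:
sed -i.bak 's/old/new/g' config.txt
```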
Miscellaneous Essentials
27. xargs — pass piped output as arguments
find . -name "*.log" | xargs rm # delete all found files
git branch --merged | grep -v main | xargs git branch -d
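One caveat with the first pipeline: filenames containing spaces or newlines break xargs's default whitespace splitting. find's -print0 plus xargs -0 switches the delimiter to NUL bytes, which can't appear in filenames:

```shell
mkdir -p demo && touch demo/"a b.log" demo/c.log   # a name with a space

# Space-safe version of the delete pipeline:
find demo -name "*.log" -print0 | xargs -0 rm
```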
28. watch — repeat a command
watch -n 2 "docker ps" # check container status every 2 seconds
29. history — command history
history | grep "docker" # find previous docker commands
!234 # re-run command #234
30. alias — create shortcuts
alias ll='ls -la'
alias gs='git status'
alias dc='docker compose'
Add these to ~/.bashrc or ~/.zshrc to make them permanent.
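Aliases only do simple text substitution at the start of a command, so they can't place arguments in the middle. For that, define a shell function instead (mkcd is a hypothetical name for a common convenience helper):

```shell
# Create a directory and cd into it in one step:
mkcd() { mkdir -p "$1" && cd "$1"; }
```

Functions live in the same ~/.bashrc or ~/.zshrc as your aliases.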
Pipes and Redirects — The Real Superpower
Individual commands are useful. Combining them is where Linux gets powerful.
# pipe (|) — feed one command's output to the next
cat access.log | grep "POST" | wc -l
# redirect (>) — send output to a file
echo "hello" > file.txt # overwrite
echo "world" >> file.txt # append
# error redirection
command 2>/dev/null # suppress error messages
command > output.log 2>&1 # stdout + stderr to one file
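The ordering in that last line trips people up: redirections apply left to right, which is why 2>&1 comes after the file redirect. A small demo using sh -c to produce output on both streams:

```shell
# stdout is pointed at the file first, then stderr is duplicated from
# stdout, so both "ok" and "oops" land in out.log:
sh -c 'echo ok; echo oops >&2' > out.log 2>&1

# Reversed, stderr is duplicated from the terminal *before* stdout moves,
# so "oops" still prints to the screen and only "ok" reaches the file:
sh -c 'echo ok; echo oops >&2' 2>&1 > out.log
```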
You don't need to memorize all 30 at once. The common ones stick through repetition, and the rest are here when you need them. What matters is knowing that something is possible — once you know that, finding the right syntax is a quick search away.