Networking & CLI Tools
HTTP Tools: curl and wget
curl — Transfer Data with URLs
curl is the Swiss Army knife of HTTP tools. It supports HTTP, HTTPS, FTP, and dozens of other protocols.
```bash
# Basic GET request
curl https://api.example.com/users

# Verbose output (see headers, TLS handshake)
curl -v https://api.example.com/users

# Show response headers only
curl -I https://api.example.com/users

# Show response headers AND body
curl -i https://api.example.com/users

# POST with JSON body
curl -X POST https://api.example.com/users \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer TOKEN" \
  -d '{"name": "Alice", "email": "alice@example.com"}'

# PUT request
curl -X PUT https://api.example.com/users/123 \
  -H "Content-Type: application/json" \
  -d '{"name": "Alice Updated"}'

# DELETE request
curl -X DELETE https://api.example.com/users/123

# Upload a file
curl -F "file=@report.pdf" https://api.example.com/upload

# Follow redirects
curl -L https://short.url/abc

# Save output to a file
curl -o output.html https://example.com
curl -O https://example.com/file.tar.gz   # Keep original name

# Set timeouts
curl --connect-timeout 5 --max-time 30 https://api.example.com

# Silent mode (no progress bar) + fail on HTTP errors
curl -sf https://api.example.com/health

# Send data from a file
curl -X POST https://api.example.com/data \
  -H "Content-Type: application/json" \
  -d @payload.json

# Basic authentication
curl -u username:password https://api.example.com/secure

# Custom headers
curl -H "X-Request-ID: abc123" \
  -H "Accept: application/json" \
  https://api.example.com/data
```
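The `-s`, `-o`, and `-w` flags combine nicely into a small health-check helper. A sketch, with an illustrative function name and URL (neither is part of curl itself):

```bash
#!/bin/sh
# check_health: report OK/FAIL for a URL based on its HTTP status code.
# The function name and the "only 200 is healthy" rule are assumptions.
check_health() {
  url=$1
  # -s silence progress, -o /dev/null discard the body,
  # -w '%{http_code}' print only the status code
  code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  if [ "$code" = "200" ]; then
    echo "OK $url"
  else
    echo "FAIL $url ($code)"
    return 1
  fi
}
```

`check_health https://api.example.com/health` then prints a single `OK`/`FAIL` line and sets the exit status accordingly, which makes it easy to drop into cron jobs or CI steps.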
```bash
# Pipe JSON output through jq for formatting
curl -s https://api.example.com/users | jq '.'
```

wget — Download Files
```bash
# Download a file
wget https://example.com/file.tar.gz

# Download to a specific filename
wget -O myfile.tar.gz https://example.com/file.tar.gz

# Resume an interrupted download
wget -c https://example.com/large-file.iso

# Download recursively (mirror a website)
wget -r -l 2 --no-parent https://docs.example.com/

# Download in the background
wget -b https://example.com/large-file.iso

# Quiet mode
wget -q https://example.com/file.tar.gz

# Rate limit (100 KB/s)
wget --limit-rate=100k https://example.com/large-file.iso
```

Network Diagnostics
netstat / ss — Network Connections
ss (socket statistics) is the modern replacement for netstat.
```bash
# Show all listening TCP ports
ss -tlnp
# -t = TCP, -l = listening, -n = numeric, -p = process

# Show all TCP connections
ss -tanp

# Show UDP connections
ss -uanp

# Find what's using a specific port
ss -tlnp | grep :8080
# or
ss -tlnp 'sport = :8080'

# Show connection statistics
ss -s

# Legacy netstat (still common)
netstat -tlnp   # Listening TCP ports
netstat -an     # All connections
netstat -rn     # Routing table
```

dig / nslookup — DNS Queries
```bash
# Look up an A record (IP address)
dig example.com
# or
dig example.com A

# Short answer only
dig +short example.com

# Look up specific record types
dig example.com MX      # Mail servers
dig example.com TXT     # Text records (SPF, DKIM)
dig example.com NS      # Name servers
dig example.com CNAME   # Canonical name
dig example.com AAAA    # IPv6 address

# Use a specific DNS server
dig @8.8.8.8 example.com

# Reverse DNS lookup
dig -x 93.184.216.34

# Trace the DNS resolution path
dig +trace example.com

# nslookup (simpler alternative)
nslookup example.com
nslookup -type=MX example.com
```

traceroute — Network Path
```bash
# Trace the route to a host
traceroute example.com

# Use TCP instead of UDP (better through firewalls)
traceroute -T example.com

# Use ICMP
traceroute -I example.com

# mtr (combines traceroute + ping, runs continuously)
mtr example.com
# Shows packet loss and latency at each hop

# ping -- test connectivity
ping -c 5 example.com     # 5 packets
ping -i 0.5 example.com   # 0.5s interval
```

iptables / nftables — Firewalling
iptables Basics
```bash
# View current rules
sudo iptables -L -n -v

# Allow incoming SSH
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Allow incoming HTTP and HTTPS
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# Allow established connections
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow loopback
sudo iptables -A INPUT -i lo -j ACCEPT

# Block everything else (default deny)
sudo iptables -P INPUT DROP

# Block a specific IP
sudo iptables -A INPUT -s 192.168.1.100 -j DROP

# Rate limit SSH connections (prevent brute force)
sudo iptables -A INPUT -p tcp --dport 22 \
  -m recent --set --name SSH
sudo iptables -A INPUT -p tcp --dport 22 \
  -m recent --update --seconds 60 --hitcount 4 \
  --name SSH -j DROP

# Save rules (persist across reboots)
sudo iptables-save > /etc/iptables/rules.v4

# Delete a rule (by line number)
sudo iptables -L --line-numbers
sudo iptables -D INPUT 3
```
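The section heading also mentions nftables, iptables' successor. The same default-deny input policy can be sketched as an nftables ruleset, loaded with `sudo nft -f ruleset.nft`; the table and chain names here are conventional choices, not mandated by nft:

```
# ruleset.nft -- minimal default-deny input policy (names are illustrative)
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        iif "lo" accept                        # loopback
        ct state established,related accept    # existing connections
        tcp dport { 22, 80, 443 } accept       # SSH, HTTP, HTTPS
    }
}
```

One nftables rule can match a set of ports (`{ 22, 80, 443 }`), which replaces three separate iptables `-A INPUT` commands.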
```bash
# Flush all rules (reset)
sudo iptables -F
```

iptables Chain Flow:

```
Incoming Packet
       │
       ▼
  ┌─────────┐    ┌──────────┐    ┌─────────┐
  │  INPUT  │    │ FORWARD  │    │ OUTPUT  │
  │  chain  │    │  chain   │    │  chain  │
  └────┬────┘    └──────────┘    └─────────┘
       │
  Rule 1: Allow SSH?   ──── Match ──▶ ACCEPT
       │ No match
  Rule 2: Allow HTTP?  ──── Match ──▶ ACCEPT
       │ No match
  Rule 3: Allow HTTPS? ──── Match ──▶ ACCEPT
       │ No match
  Default policy ───────────────────▶ DROP
```

SSH — Secure Shell
SSH is the primary tool for remote access to Linux systems.
SSH Key Management
```bash
# Generate an SSH key pair
ssh-keygen -t ed25519 -C "alice@example.com"
# Creates: ~/.ssh/id_ed25519 (private) and
#          ~/.ssh/id_ed25519.pub (public)

# For legacy systems that need RSA
ssh-keygen -t rsa -b 4096 -C "alice@example.com"

# Copy the public key to a remote server
ssh-copy-id -i ~/.ssh/id_ed25519.pub user@server
```
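Key generation can also run non-interactively, which is useful in provisioning or CI scripts; a sketch (the temp directory and key comment are illustrative, chosen so the demo doesn't touch `~/.ssh`):

```bash
# Generate a key pair without prompts (illustrative paths).
keydir=$(mktemp -d)              # temp dir instead of ~/.ssh for the demo
ssh-keygen -q -t ed25519 -N '' \
  -C "ci@example.com" -f "$keydir/id_ed25519"
# -q quiet, -N '' empty passphrase, -f output file
ls "$keydir"                     # id_ed25519  id_ed25519.pub
```

An empty passphrase is acceptable for ephemeral automation keys; long-lived personal keys should keep a passphrase and rely on `ssh-agent`.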
```bash
# Connect to a remote server
ssh user@server.example.com
ssh -p 2222 user@server.example.com   # Custom port
```

SSH Config File
```
# ~/.ssh/config -- simplifies SSH connections with aliases

Host prod
    HostName 10.0.1.50
    User deploy
    Port 22
    IdentityFile ~/.ssh/id_ed25519_prod
    ForwardAgent yes

Host staging
    HostName 10.0.2.50
    User deploy
    IdentityFile ~/.ssh/id_ed25519_staging

Host bastion
    HostName bastion.example.com
    User alice
    IdentityFile ~/.ssh/id_ed25519

# Connect through bastion (jump host)
Host internal-*.example.com
    ProxyJump bastion
    User alice

# Defaults for all connections
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
    AddKeysToAgent yes
    IdentitiesOnly yes
```

```bash
# Now you can just type:
ssh prod      # Instead of: ssh -i ~/.ssh/id_ed25519_prod deploy@10.0.1.50
ssh staging   # Instead of: ssh -i ~/.ssh/id_ed25519_staging deploy@10.0.2.50
```

SSH Tunneling
```bash
# Local port forwarding:
# access a remote service through a local port
ssh -L 8080:localhost:80 user@server
# Now http://localhost:8080 → server:80

# Access a database through an SSH tunnel
ssh -L 5432:db-server:5432 user@bastion
# Now psql -h localhost -p 5432 → connects to db-server:5432

# Remote port forwarding:
# expose a local service on the remote server
ssh -R 8080:localhost:3000 user@server
# Now server:8080 → your localhost:3000

# Dynamic port forwarding (SOCKS proxy)
ssh -D 1080 user@server
# Configure your browser to use a SOCKS5 proxy at localhost:1080

# Run a command remotely
ssh user@server "df -h && free -h"

# SCP: copy files over SSH
scp local_file.txt user@server:/path/to/destination/
scp user@server:/path/to/file.txt ./local_copy.txt
scp -r local_dir/ user@server:/path/to/destination/

# rsync: efficient file sync over SSH
rsync -avz --progress local_dir/ user@server:/path/to/dest/
rsync -avz --delete source/ dest/   # Mirror (delete extra files)
```

tmux — Terminal Multiplexer
tmux lets you create multiple terminal sessions within a single window, detach from them, and reattach later. Essential for remote server work.
tmux Layout:
```
┌────────────────────────────────────────────────────────┐
│ Session: dev-server                                    │
│                                                        │
│ ┌───────────── Window 0: editor ─────────────────────┐ │
│ │                                                    │ │
│ │ ┌── Pane 0 ────────────┐ ┌── Pane 1 ────────────┐  │ │
│ │ │                      │ │                      │  │ │
│ │ │  vim app.py          │ │  tail -f app.log     │  │ │
│ │ │                      │ │                      │  │ │
│ │ └──────────────────────┘ └──────────────────────┘  │ │
│ │                                                    │ │
│ │ ┌── Pane 2 ────────────────────────────────────┐   │ │
│ │ │  $ python test.py                            │   │ │
│ │ └──────────────────────────────────────────────┘   │ │
│ └────────────────────────────────────────────────────┘ │
│                                                        │
│ [dev-server] 0:editor*  1:build  2:monitoring          │
└────────────────────────────────────────────────────────┘
```
Hierarchy: Session → Windows → Panes

```bash
# Start a new session
tmux new -s dev-server
```
```bash
# Detach from the session (keeps it running)
# Press: Ctrl+b, then d

# List sessions
tmux ls

# Reattach to a session
tmux attach -t dev-server

# Key bindings (prefix = Ctrl+b):
# Ctrl+b c     Create new window
# Ctrl+b n     Next window
# Ctrl+b p     Previous window
# Ctrl+b 0-9   Switch to window by number
# Ctrl+b %     Split pane vertically
# Ctrl+b "     Split pane horizontally
# Ctrl+b o     Switch to next pane
# Ctrl+b x     Close current pane
# Ctrl+b d     Detach from session
# Ctrl+b [     Enter scroll/copy mode (q to exit)
# Ctrl+b z     Toggle pane zoom (fullscreen)
# Ctrl+b ,     Rename current window

# Kill a session
tmux kill-session -t dev-server
```

Essential CLI Tools
jq — JSON Processor
```bash
# Pretty-print JSON
echo '{"name":"Alice","age":30}' | jq '.'

# Extract a field
curl -s https://api.example.com/user/1 | jq '.name'

# Extract nested fields
echo '{"user":{"name":"Alice","address":{"city":"NYC"}}}' \
  | jq '.user.address.city'

# Filter arrays
echo '[{"name":"Alice","age":30},{"name":"Bob","age":25}]' \
  | jq '.[] | select(.age > 27)'

# Transform data
echo '[{"name":"Alice","age":30},{"name":"Bob","age":25}]' \
  | jq '[.[] | {person: .name, years: .age}]'

# Get array length
echo '[1,2,3,4,5]' | jq 'length'

# Extract specific fields from an API response
curl -s https://api.github.com/repos/torvalds/linux \
  | jq '{name: .name, stars: .stargazers_count, language: .language}'

# Process JSONL (one JSON object per line)
cat events.jsonl | jq -c 'select(.level == "error")'
```

awk — Text Processing
```bash
# Print specific columns
echo "Alice 30 Engineer" | awk '{print $1, $3}'
# Output: Alice Engineer

# Process CSV-like data
echo "Alice,30,Engineer" | awk -F',' '{print $1, $3}'
# Output: Alice Engineer

# Sum a column
awk '{sum += $2} END {print "Total:", sum}' data.txt

# Filter and process log files:
# print URLs with a 500 status
awk '$9 == 500 {print $7}' access.log

# Count requests by status code
awk '{count[$9]++} END {for (c in count) print c, count[c]}' access.log

# Average response time (assuming column 10 is the time)
awk '{sum += $10; n++} END {print "Avg:", sum/n}' access.log

# Print lines longer than 100 characters
awk 'length > 100' file.txt
```
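awk's associative arrays also handle group-by aggregation, such as summing a value per key; a sketch over inline sample input (the two-column user/bytes layout is assumed for illustration):

```bash
# Sum bytes (column 2) per user (column 1); sort for stable output,
# since awk's for-in iteration order is unspecified
printf 'alice 120\nbob 80\nalice 30\n' \
  | awk '{bytes[$1] += $2} END {for (u in bytes) print u, bytes[u]}' \
  | sort
# alice 150
# bob 80
```

The same pattern scales to real logs: swap the `printf` for a file and `$1`/`$2` for whichever columns hold the key and the value.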
```bash
# Replace a field
awk -F: 'BEGIN {OFS=":"} {$7="/bin/bash"; print}' /etc/passwd
```

sed — Stream Editor
```bash
# Find and replace (first occurrence per line)
sed 's/old/new/' file.txt

# Find and replace (all occurrences)
sed 's/old/new/g' file.txt

# In-place editing (modify the file)
sed -i 's/old/new/g' file.txt

# In-place with a backup
sed -i.bak 's/old/new/g' file.txt

# Delete lines matching a pattern
sed '/^#/d' config.txt   # Delete comments
sed '/^$/d' file.txt     # Delete empty lines

# Print only matching lines (like grep)
sed -n '/error/p' log.txt

# Insert a line before a match
sed '/\[database\]/i # Database configuration' config.ini

# Insert a line after a match
sed '/\[database\]/a host=localhost' config.ini

# Replace in a range of lines
sed '10,20s/foo/bar/g' file.txt   # Lines 10-20
```
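Combining `-n` with the `p` flag turns sed into a field extractor: substitute, then print only the lines that matched. A sketch over inline input (the `host=` key is illustrative):

```bash
# Extract the value of every host= setting, ignoring other keys
printf 'host=db1\nport=5432\nhost=db2\n' \
  | sed -n 's/^host=\(.*\)/\1/p'
# db1
# db2
```

`\(...\)` captures the value and `\1` replays it, so the substitution strips the `host=` prefix; non-matching lines are suppressed by `-n`.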
```bash
# Multiple operations
sed -e 's/foo/bar/g' -e 's/baz/qux/g' file.txt
```

xargs — Build and Execute Commands
```bash
# Basic: convert stdin to arguments
echo "file1.txt file2.txt file3.txt" | xargs rm

# Delete every file that find returns (xargs batches them into
# as few rm invocations as possible)
find . -name "*.log" | xargs rm

# With a placeholder
find . -name "*.py" | xargs -I{} cp {} /backup/

# Parallel execution (4 jobs at once)
find . -name "*.jpg" | xargs -P 4 -I{} convert {} -resize 50% {}

# Limit the number of arguments per command
echo {1..100} | xargs -n 10 echo
# Runs echo with 10 args at a time

# Confirm before each execution
find . -name "*.tmp" | xargs -p rm

# Handle filenames with spaces
find . -name "*.log" -print0 | xargs -0 rm

# Count lines in all Python files
find . -name "*.py" | xargs wc -l
```
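The `-print0`/`-0` pairing is worth a concrete run-through, since unquoted filenames with spaces are the classic xargs pitfall; a sketch in a throwaway directory:

```bash
# Create files, one with a space in its name, then delete them safely.
# Newline-delimited xargs would split "b c.log" into two bogus arguments;
# NUL-delimited input keeps it intact.
dir=$(mktemp -d)
touch "$dir/a.log" "$dir/b c.log" "$dir/keep.txt"
find "$dir" -name "*.log" -print0 | xargs -0 rm
ls "$dir"
# keep.txt
```

Both `.log` files are removed, including the one containing a space, while `keep.txt` survives.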
```bash
# Grep in all JavaScript files
find . -name "*.js" | xargs grep "console.log"
```

Other Useful Tools
```bash
# watch -- execute a command repeatedly
watch -n 2 "df -h"        # Update every 2 seconds
watch -d "ps aux | head"  # Highlight changes

# tee -- write to stdout AND a file
command | tee output.log     # See and save output
command | tee -a output.log  # Append mode
```
```bash
# sort and uniq
sort file.txt             # Sort lines alphabetically
sort -n file.txt          # Sort numerically
sort -k2 file.txt         # Sort by second column
sort file.txt | uniq      # Remove duplicates (uniq needs sorted input)
sort file.txt | uniq -c   # Count occurrences
```
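`sort` and `uniq -c` chain into the classic frequency-count pipeline (e.g. most frequent IPs in a log); a sketch over inline input:

```bash
# Count duplicate lines, then order by count, highest first:
# sort groups identical lines, uniq -c counts each group,
# sort -rn orders by the leading count, descending
printf 'a\nb\na\nc\na\nb\n' | sort | uniq -c | sort -rn
# a appears 3 times, then b (2), then c (1)
```

Replacing the `printf` with something like `awk '{print $1}' access.log` turns this into a top-talkers report for any log whose first column is an IP.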
```bash
# cut -- extract columns
cut -d',' -f1,3 data.csv   # Fields 1 and 3 from a CSV
cut -c1-10 file.txt        # First 10 characters

# tr -- translate characters
echo "HELLO" | tr 'A-Z' 'a-z'    # Lowercase
echo "hello world" | tr ' ' '_'  # Replace spaces
cat file.txt | tr -d '\r'        # Remove carriage returns

# wc -- word/line/char count
wc -l file.txt               # Line count
wc -w file.txt               # Word count
wc -c file.txt               # Byte count
find . -name "*.py" | wc -l  # Count Python files
```

Summary
| Tool | Purpose | Key Usage |
|---|---|---|
| curl | HTTP requests | API testing, downloading, debugging |
| wget | File downloads | Mirroring, batch downloads |
| ss/netstat | Network connections | Find open ports, debug connectivity |
| dig | DNS queries | Troubleshoot DNS issues |
| traceroute | Network path | Find network bottlenecks |
| iptables | Firewall rules | Secure servers, port management |
| SSH | Remote access | Keys, tunneling, config, SCP/rsync |
| tmux | Terminal multiplexer | Persistent sessions, split panes |
| jq | JSON processing | Parse API responses, filter data |
| awk | Text processing | Column extraction, aggregation |
| sed | Stream editing | Find-replace, line manipulation |
| xargs | Command building | Process stdin as arguments |