L2 Support Engineer · Fintech · Week 3
Week 3 · Day 3 & Day 4
Log Parsing & Cron Jobs
Day 3 teaches you to extract exactly what you need from logs using powerful command combinations. Day 4 teaches you to schedule tasks so they run automatically without you being there.
Day 3 — Log Parsing
Day 4 — Cron Jobs
grep · awk · pipes · crontab
Day 3
Log Parsing
01 The Simple Idea
Real-life Analogy
Think of a log file like a very long CCTV recording. You don't watch 8 hours of footage to find one incident. You fast-forward, filter by time, zoom in.
grep, awk, and pipes are your fast-forward and zoom tools — they let you cut through millions of log lines and extract exactly the one piece of information you need in seconds.
02 Commands — grep, awk & Pipes
What it does: grep finds lines that match a pattern. Combine it with flags for powerful filtering.
grep — useful flags
# Count ERROR lines
grep -c "ERROR" payment.log
# Show line number of each ERROR
grep -n "ERROR" payment.log
# Show 3 lines BEFORE the error (context)
grep -B 3 "ERROR" payment.log
# Show 3 lines AFTER the error (context)
grep -A 3 "ERROR" payment.log
# Search for multiple patterns at once
grep -E "ERROR|WARN|TIMEOUT" payment.log
# Search recursively in all files in a folder
grep -r "DB_TIMEOUT" /var/logs/
💡 L2 daily use: grep -c "ERROR" payment.log gives you the exact error count for a client report in one second.
What it does: awk treats each log line like a table row and lets you pick specific columns (fields) from it. By default, fields are separated by spaces. $1 = first word, $2 = second word, and so on.
awk examples
# Sample log line:
# [2024-03-15 14:02:05] ERROR DB_TIMEOUT TXN-9823
# $1=date $2=time $3=level $4=error $5=txn_id
# Print only the timestamp (field 1 and 2)
awk '{print $1, $2}' payment.log
# Print only the error type (field 4)
awk '{print $4}' payment.log
# Print lines where field 3 equals ERROR
awk '$3 == "ERROR" {print $0}' payment.log
# Custom separator — for CSV files
awk -F',' '{print $2}' report.csv
💡 L2 use: Extract just the TXN IDs of all failed transactions from a log — no manual copy-pasting. Send the list straight to the client.
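A minimal sketch of that TXN-ID extraction, assuming the log format shown above (field 3 = level, field 5 = transaction ID); the sample entries here are made up:

```shell
# Build a tiny sample log matching the format above (hypothetical data)
printf '%s\n' \
  '[2024-03-15 14:02:05] ERROR DB_TIMEOUT TXN-9823' \
  '[2024-03-15 14:02:09] INFO PAYMENT_OK TXN-9824' \
  '[2024-03-15 14:03:11] ERROR DB_TIMEOUT TXN-9825' > sample.log

# Keep only ERROR lines, print field 5 (the TXN ID)
awk '$3 == "ERROR" {print $5}' sample.log
# prints TXN-9823 and TXN-9825
```

Redirect the output with `> txn-ids.txt` and the client-ready list is done.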
What it does: The pipe | connects two commands — the output of the first becomes the input of the second. This is how you build powerful one-line commands that do multiple things at once.
pipe combinations
# Watch live log but only show ERROR lines
tail -f payment.log | grep "ERROR"
# Count how many unique error types appear
grep "ERROR" payment.log | awk '{print $4}' | sort | uniq -c
# Get top 5 most repeated errors
grep "ERROR" payment.log | awk '{print $4}' | sort | uniq -c | sort -rn | head -5
# Extract disk usage % as a bare number (strip the % sign)
df / | awk 'NR==2 {print $5}' | tr -d '%'
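That last pipe only prints the number. A minimal sketch of the comparison step it feeds into (the 80% threshold is an example value, not a standard):

```shell
# Capture root-disk usage as a bare number
usage=$(df / | awk 'NR==2 {print $5}' | tr -d '%')

# Alert when usage crosses the threshold (80 here is arbitrary)
if [ "$usage" -gt 80 ]; then
  echo "ALERT: root disk at ${usage}%"
else
  echo "OK: root disk at ${usage}%"
fi
```

The same pattern works for any metric you can reduce to a single number with a pipe chain.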
| Command | What it does in the chain |
| grep "ERROR" payment.log | Get all lines containing ERROR |
| awk '{print $4}' | From those lines, extract only the 4th field (error code) |
| sort | Sort the error codes alphabetically so duplicates are together |
| uniq -c | Count how many times each unique error code appears |
| sort -rn | Sort by count, highest first |
| head -5 | Show only the top 5 most frequent errors |
💡 This single pipe chain tells you which 5 errors are causing the most failures — the most useful command you'll run after an outage.
03 Day 3 Lab — Extract ERROR Counts
Create a sample log file to parse
Use the log file you created in Week 2 Day 2, or create a fresh one.
terminal
cat ~/payment-service.log # view it first
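If the Week 2 file is gone, one way to create a fresh sample (these entries are invented to match the log format used in this lesson, with 5 ERROR lines):

```shell
# Write a small sample log with a known number of errors
cat > ~/payment-service.log << 'EOF'
[2024-03-15 14:01:00] INFO PAYMENT_OK TXN-9820
[2024-03-15 14:01:30] ERROR DB_TIMEOUT TXN-9821
[2024-03-15 14:02:05] ERROR DB_TIMEOUT TXN-9823
[2024-03-15 14:02:40] WARN SLOW_QUERY TXN-9824
[2024-03-15 14:03:10] ERROR GATEWAY_5XX TXN-9825
[2024-03-15 14:03:55] ERROR DB_TIMEOUT TXN-9826
[2024-03-15 14:04:20] ERROR GATEWAY_5XX TXN-9827
EOF
```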
Count total ERROR lines
Get the exact number of errors in the log.
terminal
grep -c "ERROR" ~/payment-service.log
→ Expected output: 5
Extract unique error types and their count
Use the full pipe chain to get a ranked error summary.
terminal
grep "ERROR" ~/payment-service.log | awk '{print $4}' | sort | uniq -c | sort -rn
→ Expected: count + error type, sorted highest first
Save the error report to a file
Redirect the output into a report file you can share.
terminal
grep "ERROR" ~/payment-service.log | awk '{print $4}' | sort | uniq -c | sort -rn > ~/error-report.txt
cat ~/error-report.txt
→ error-report.txt created — contains ranked error summary ✅
Find errors with context — 2 lines before and after
See what happened just before and after each error.
terminal
grep -B 2 -A 2 "ERROR" ~/payment-service.log
→ Shows each ERROR with 2 lines of context — reveals the build-up ✅
Day 4
Cron Jobs
04 The Simple Idea
Real-life Analogy
Think of a bank's scheduled reports. Every morning at 9 AM, the system automatically generates yesterday's transaction summary and emails it to management. Nobody manually triggers it — it's scheduled.
Cron jobs are exactly that — you tell Linux "run this script every day at 8 AM" and it does it automatically forever, even when you're asleep.
05 Crontab — How It Works
What is Crontab?
Crontab is Linux's built-in task scheduler. You give it a time pattern + a command and it runs that command automatically at the right time. Every scheduled job is called a cron job.
To open and edit your cron schedule, you run: crontab -e
# ┌───────── minute (0-59)
# │ ┌─────── hour (0-23)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-6, Sunday = 0)
# │ │ │ │ │
  * * * * * /path/script.sh   ← Command (always the full path)
| Cron Expression | When it runs |
| 0 8 * * * | Every day at 8:00 AM |
| 0 8 * * 1 | Every Monday at 8:00 AM |
| */30 * * * * | Every 30 minutes |
| 0 0 * * * | Every day at midnight |
| 0 8,17 * * * | Every day at 8 AM and 5 PM |
| 0 8 1 * * | 1st of every month at 8 AM |
crontab commands
# Open crontab editor to add/edit jobs
crontab -e
# List all current cron jobs
crontab -l
# Remove all cron jobs (careful!)
crontab -r
example cron jobs for L2 use
# Run health check every day at 8 AM — save output to log
0 8 * * * /home/kali/health-check.sh >> /home/kali/daily-report.txt
# Run error count script every hour
0 * * * * /home/kali/error-count.sh
# Run disk check every 30 minutes
*/30 * * * * /home/kali/disk-check.sh
# Run log cleanup every Sunday at midnight
0 0 * * 0 /home/kali/cleanup-logs.sh
⚠️ Important: Always use the full path to your script in crontab — not ./script.sh but /home/kali/script.sh. Cron doesn't know your current directory.
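Cron also runs jobs with a minimal environment: its PATH usually contains only /usr/bin and /bin, so a script that works in your shell can still fail under cron. One common fix, sketched below, is to set PATH at the top of the crontab (the exact value is an example):

```shell
# First lines of `crontab -e`: variable assignments apply to all jobs below
PATH=/usr/local/bin:/usr/bin:/bin

# Jobs can now find tools like awk and grep without absolute paths
0 8 * * * /home/kali/error-count.sh
```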
06 Day 4 Lab — Schedule Daily Health Check
Create a log parser script to schedule
This script counts errors and saves the result — this is what cron will run.
error-count.sh
#!/bin/bash
LOG="/home/kali/payment-service.log"
OUT="/home/kali/daily-error-report.txt"
echo "=== Error Report: $(date) ===" >> "$OUT"
grep -c "ERROR" "$LOG" >> "$OUT"
grep "ERROR" "$LOG" | awk '{print $4}' | sort | uniq -c >> "$OUT"
Give it permission and test it manually first
Always test a script manually before scheduling it.
terminal
chmod +x /home/kali/error-count.sh
/home/kali/error-count.sh
cat /home/kali/daily-error-report.txt
→ Report file created with timestamp and error counts ✅
Open crontab and schedule it
Add two cron jobs — one for daily health check, one for hourly error count.
crontab -e
# Daily health check at 8 AM
0 8 * * * /home/kali/health-check.sh >> /home/kali/health-report.txt
# Error count every hour
0 * * * * /home/kali/error-count.sh
Confirm the cron jobs are scheduled
List all active cron jobs to confirm they were saved correctly.
terminal
crontab -l
→ Both jobs listed — health check at 8 AM, error count hourly ✅
07 Quick Cheat Sheet — Day 3 & 4
grep -c "ERR" file → Count matching lines
grep -n "ERR" file → Show line numbers with matches
grep -B 2 -A 2 "ERR" → Show 2 lines before and after match
grep -E "ERR|WARN" → Match multiple patterns at once
awk '{print $2}' → Print the 2nd field of each line
awk -F',' '{print $1}' → Use comma as field separator (CSV)
sort | uniq -c → Sort and count duplicates
sort -rn | head -5 → Top 5 highest count items
cmd > file.txt → Save output to file (overwrites)
cmd >> file.txt → Append output to file (adds to end)
crontab -e → Open cron editor to add/edit jobs
crontab -l → List all current cron jobs
08 Real L2 Scenarios
01
After an outage, manager asks: "What were the top 3 errors?" — You run the full pipe chain in 5 seconds: grep "ERROR" payment.log | awk '{print $4}' | sort | uniq -c | sort -rn | head -3 — done.
02
Client says: "Can you send us the error summary every morning?" — You schedule error-count.sh with crontab at 8 AM. It runs automatically every day, saves to a file, and you forward it. No manual work.
03
You need to find all logs that contain a specific TXN ID across 20 files: grep -r "TXN-9823" /var/logs/ — searches all files recursively in one command.
04
Disk is filling up because old logs are never deleted. You write a cleanup script and schedule it with crontab: every Sunday at midnight — 0 0 * * 0 — it runs automatically and keeps disk healthy without manual intervention.
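A sketch of what that cleanup script could look like, demonstrated on a temporary folder so it is safe to run anywhere (the *.log pattern and the 7-day retention are assumptions; in real use point it at your log directory):

```shell
#!/bin/bash
# Demo on a temporary folder; in real use set LOG_DIR to your log path
LOG_DIR=$(mktemp -d)
touch -d '10 days ago' "$LOG_DIR/old.log"   # simulate a stale log
touch "$LOG_DIR/fresh.log"                  # simulate a recent log

# Delete .log files older than 7 days (retention period is an example)
find "$LOG_DIR" -type f -name "*.log" -mtime +7 -delete

ls "$LOG_DIR"   # only fresh.log remains
```

Save the real version as cleanup-logs.sh, test it manually, then schedule it with 0 0 * * 0 as shown above.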
✅ Week 3 · Day 3 & 4 Outcomes
- Use grep with advanced flags — -c, -n, -B, -A, -E, -r — for precise log filtering
- Use awk to extract specific fields from log lines by column number
- Build pipe chains combining grep + awk + sort + uniq to generate error summaries
- Redirect output to files using > and >> for saving reports
- Complete Day 3 lab — extract error counts and save a ranked error report from a log file
- Read and write crontab syntax — understand all 5 time fields
- Schedule scripts to run daily, hourly, weekly, or on a custom interval
- Complete Day 4 lab — schedule health check and error count scripts using crontab
- Verify scheduled cron jobs are active using crontab -l