Practical Roadmap (Recon → Scan → Report)
1) Preparation / Installation (Kali)
# update
sudo apt update && sudo apt upgrade -y
# useful tools
sudo apt install -y git jq unzip
# amass
sudo apt install -y amass
# subfinder (go)
go install -v github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
# httpx (ProjectDiscovery)
go install -v github.com/projectdiscovery/httpx/cmd/httpx@latest
# nuclei
go install -v github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest
nuclei -update-templates
# optional: gau, assetfinder
go install -v github.com/lc/gau/v2/cmd/gau@latest
go install -v github.com/tomnomnom/assetfinder@latest
Note: make sure $GOPATH/bin is in your PATH (e.g. export PATH=$PATH:$(go env GOPATH)/bin).
2) Collect targets (subdomains)
Example combining subfinder + amass + assetfinder:
mkdir -p recon && cd recon
# subfinder
subfinder -d example.com -o subfinder.txt
# amass (passive)
amass enum -passive -d example.com -o amass.txt
# assetfinder
assetfinder example.com > assetfinder.txt
# join + deduplicate
cat subfinder.txt amass.txt assetfinder.txt | sort -u > all_subs.txt
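When merging sources it also helps to strip wildcard entries and blank lines before deduplicating; a minimal sketch, with printf lines standing in for real tool output (the sample hostnames are made up):

```shell
# hypothetical sample files standing in for real subfinder/amass output
printf 'a.example.com\nb.example.com\n' > subfinder.txt
printf 'b.example.com\n*.example.com\n\n' > amass.txt

# drop wildcard (*.domain) entries and blank lines, then deduplicate
cat subfinder.txt amass.txt | grep -v '^\*\.' | grep -v '^$' | sort -u > all_subs.txt
cat all_subs.txt
```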
3) Check which are online (httpx)
# check http/https, get title, status, ip, port
cat all_subs.txt | httpx -threads 50 -ports 80,443,8080,8443 -silent -status-code -title -ip -o alive.txt
Useful options:
- -mc 200,301,302 to filter specific status codes.
- -status-code, -title, -ip, -location for more info.
- -timeout 10 to adjust the timeout.
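Because -status-code/-title/-ip append bracketed columns to each output line, it is worth keeping a plain-URL copy for tools that expect bare URLs. A sketch with a simulated alive.txt (the sample lines are fabricated to mimic httpx's annotated output):

```shell
# simulated httpx output: URL plus annotation columns
cat > alive.txt <<'EOF'
https://a.example.com [200] [Welcome] [93.184.216.34]
https://b.example.com:8443 [302] [Login] [93.184.216.34]
EOF

# keep only the first (URL) column
awk '{print $1}' alive.txt > alive_urls.txt
cat alive_urls.txt
```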
4) Scanning with Nuclei
Basic:
# alive.txt contains the extra [status]/[title] columns, so pass only the URL column to nuclei
awk '{print $1}' alive.txt | nuclei -t /path/to/nuclei-templates/ -c 50 -o nuclei_results.txt
Filtering by tags/severity:
awk '{print $1}' alive.txt | nuclei -tags cve,exposure -severity high,critical -c 50 -jsonl -o nuclei_high.json
Tips:
- Use -c for concurrency and -rl for rate limiting (e.g. -rl 25).
- Use -timeout 15 if you have network issues.
- Use -proxy http://127.0.0.1:8080 if you want to route traffic through Burp.
5) Triage / manual validation
- Open nuclei_results.txt / the JSON output and filter by severity.
- Reproduce findings manually via curl/Burp.
- Mark false positives.
- Collect evidence (requests, responses, screenshots).
Example reproduction with curl:
curl -i -k "https://vulnerable.example.com/endpoint?param=payload"
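For bulk triage, the JSONL output can be filtered with jq. The field names below (info.severity, matched-at) follow nuclei's JSONL export, and the sample lines are fabricated for illustration:

```shell
# fabricated JSONL lines shaped like nuclei's output
cat > nuclei_results.json <<'EOF'
{"template-id":"exposed-token","info":{"severity":"high"},"matched-at":"https://a.example.com/token"}
{"template-id":"tech-detect","info":{"severity":"info"},"matched-at":"https://a.example.com/"}
EOF

# list only high/critical findings
jq -r 'select(.info.severity == "high" or .info.severity == "critical") | ."matched-at"' nuclei_results.json
```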
6) Evidence collection
- Save requests/responses (Burp, ZAP).
- Screenshots: chromium --headless --screenshot=screen.png https://target
- Keep the Nuclei JSON output and the captured requests/screenshots to attach to the report.
- Note the URL, template used, timestamp, and reproduction steps.
7) Quick report structure (minimum)
- Executive summary (1 paragraph)
- Scope and methodology (tools + dates)
- Findings — each item:
- Vulnerability title
- Severity (Low/Medium/High/Critical)
- Evidence (URL, request, response, image)
- Steps to reproduce (step-by-step)
- Impact
- Recommendation
- References (CVE/CWE)
- Conclusion
- Attachments (Nuclei JSON, screenshots, logs)
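The structure above can be bootstrapped as a fill-in file; a minimal sketch (the file name and exact headings are my own choice, not a standard):

```shell
# write a report skeleton matching the minimum structure above
cat > report_skeleton.md <<'EOF'
# Pentest Report - TARGET

## Executive summary
(1 paragraph)

## Scope and methodology
Tools: subfinder, amass, httpx, nuclei. Dates: YYYY-MM-DD.

## Findings
### [Vulnerability title] - Severity: High
- Evidence: URL, request, response, screenshot
- Steps to reproduce:
- Impact:
- Recommendation:
- References (CVE/CWE):

## Conclusion

## Attachments
- Nuclei JSON, screenshots, logs
EOF
echo "wrote report_skeleton.md"
```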
Automated pipeline (bash script)
File recon_nuclei_pipeline.sh:
#!/usr/bin/env bash
set -euo pipefail

if [ "$#" -ne 1 ]; then
  echo "Usage: $0 <domain>" >&2
  exit 1
fi

DOMAIN="$1"
OUTDIR="scan_${DOMAIN}_$(date +%Y%m%d_%H%M)"
mkdir -p "$OUTDIR"

# 1 - subdomains
subfinder -d "$DOMAIN" -o "$OUTDIR/subfinder.txt"
amass enum -passive -d "$DOMAIN" -o "$OUTDIR/amass.txt"
cat "$OUTDIR/subfinder.txt" "$OUTDIR/amass.txt" | sort -u > "$OUTDIR/all_subs.txt"

# 2 - alive (plain URLs only, so nuclei can consume the file directly)
httpx -l "$OUTDIR/all_subs.txt" -threads 50 -ports 80,443,8080,8443 -silent -o "$OUTDIR/alive.txt"

# 3 - nuclei (uses the default installed templates; pass -t to override)
nuclei -l "$OUTDIR/alive.txt" -c 50 -jsonl -o "$OUTDIR/nuclei_results.json"

echo "Results in $OUTDIR"
Run:
bash recon_nuclei_pipeline.sh example.com
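After a run, a quick severity breakdown of the JSONL results helps prioritize what to triage first. A sketch using jq, again with fabricated sample lines and the assumed info.severity field name:

```shell
# fabricated JSONL results standing in for a real pipeline run
cat > nuclei_results.json <<'EOF'
{"template-id":"a","info":{"severity":"high"}}
{"template-id":"b","info":{"severity":"high"}}
{"template-id":"c","info":{"severity":"medium"}}
EOF

# count findings per severity, highest count first
jq -r '.info.severity' nuclei_results.json | sort | uniq -c | sort -rn
```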
Custom templates — advanced section
Important concepts
- requests (called http in recent nuclei versions): one or more requests (GET/POST).
- matchers: types such as status, word, regex, size, binary, and dsl.
- extractors: extract parts of the response (regex, kval, json, xpath).
- redirects: true when you want to follow redirects.
- payloads: for fuzzing and brute force.
- info fields: id, name, author, severity, tags, description, reference.
- attack modes for payloads: batteringram, pitchfork, clusterbomb.
Example: complex template (fuzz + chained request + extractor)
Save as magento-sso-check.yaml (illustrative example):
id: custom-magento-sso-open

info:
  name: Magento SSO / exposed login token
  author: your_name
  severity: high
  tags: magento,sso,token

http:
  - method: GET
    path:
      - "{{BaseURL}}/sso/authorize?client_id={{client}}"
    payloads:
      client:
        - "admin"
        - "magento_admin"
        - "default"
    attack: batteringram
    stop-at-first-match: true
    matchers-condition: and
    matchers:
      - type: status
        status:
          - 200
      - type: word
        part: body
        words:
          - "token"
          - "access_token"
    extractors:
      - type: regex
        name: token
        internal: true
        group: 2
        part: body
        regex:
          - '(access_token=|"access_token":")([A-Za-z0-9\-_\.]+)'

  - method: GET
    path:
      - "{{BaseURL}}/api/userinfo"
    headers:
      Authorization: "Bearer {{token}}"
    matchers-condition: and
    matchers:
      - type: status
        status:
          - 200
      - type: word
        part: body
        words:
          - "email"
          - "id"
Explanation:
- The first request tries the SSO endpoint with each client payload value.
- If the response contains something token-like, it is captured by the regex extractor and made available to the next request.
- The second request reuses the extracted token against /api/userinfo and checks whether account data comes back (indicating a weak/exposed token).
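The extraction step can be checked outside nuclei by applying an equivalent regex with grep against a simulated response body (the body string below is made up for illustration):

```shell
# made-up response body of the kind the first request might return
body='{"status":"ok","access_token":"abc123.DEF-456","expires":3600}'

# pull out the token the same way the template's extractor would
token=$(printf '%s' "$body" | grep -oE '"access_token":"[A-Za-z0-9._-]+"' | cut -d'"' -f4)
echo "$token"
```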
Testing templates locally
nuclei -t magento-sso-check.yaml -u https://target.example.com -debug
The -debug flag shows the full requests and responses.
Best practices for templates
- Unique id and author.
- severity consistent with the real impact.
- Use matchers-condition (and/or) to avoid false positives.
- Prefer combining regex and status rather than relying on word alone.
- Include references (CVE docs, OWASP) when applicable.
- Use stop-at-first-match for performance when doing sequential checks.
- Avoid overly aggressive payloads by default; tag heavy checks so they only run when selected with -tags.
Strategies to reduce false positives
- Combine status + word + regex matchers.
- Use content-length/size constraints when a valid response is known to have a large body.
- Use extractors to validate that tokens/IDs have the expected format.
- Try secondary requests to confirm (chained requests).
- Test in a controlled environment before running in production.
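As a sketch of the combined-matcher idea, a fragment (illustrative values, not from a real template) that only fires when the status code, a body keyword, and a token-shaped regex all match:

```yaml
matchers-condition: and
matchers:
  - type: status
    status:
      - 200
  - type: word
    part: body
    words:
      - "access_token"
  - type: regex
    part: body
    regex:
      - '"access_token"\s*:\s*"[A-Za-z0-9._-]{20,}"'
```

Requiring all three conditions sharply reduces the chance that a generic error page or marketing copy containing the word alone triggers a finding.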
CI integration — GitHub Actions example
File .github/workflows/nuclei-scan.yml:
name: Nuclei Scan

on:
  workflow_dispatch:
  schedule:
    - cron: '0 3 * * *' # optional

jobs:
  nuclei:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: 'stable'

      - name: Install nuclei
        run: |
          go install github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest
          echo "$(go env GOPATH)/bin" >> "$GITHUB_PATH"

      - name: Update templates
        run: nuclei -update-templates

      - name: Run scan
        run: |
          # replace with your in-scope targets (one URL per line)
          echo "https://staging.example.com" > targets.txt
          nuclei -l targets.txt -c 25 -jsonl -o nuclei_results.json

      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: nuclei-results
          path: nuclei_results.json
Warning: running automated scans against production targets requires authorization and care — respect engagement scope and rules.
Authenticated scanning & scanning with credentials
- Use headers in the template (Authorization: Bearer ...) or pass -H on the command line.
- For cookies/sessions, pass -H "Cookie: ...".
- For OAuth/SSO flows, build templates that chain requests (login -> extract cookie/token -> access endpoint).
- Test with test accounts.
Final tips and checklist before running
- Do you have written authorization / defined scope? (mandatory)
- Templates updated? (nuclei -update-templates)
- Tested in staging when possible?
- Limited -c and -rl to avoid taking down services?
- Saved JSON output and collected screenshots/requests for the report?
- Performed manual validation of critical findings?
