The Handoff Problem: Writing Bug Reports That Survive a Triage Engineer's First 90 Seconds
Legal & Ethical Disclaimer
This content is provided for EDUCATIONAL and AUTHORIZED SECURITY TESTING purposes only.
Acceptable use:
- Use these techniques on systems you own or have explicit written permission to test
- Practice in authorized lab environments (VulnHub, HackTheBox, DVWA, etc.)
- Follow responsible disclosure practices when finding vulnerabilities
- Use knowledge for defensive security and authorized penetration testing
Never:
- Access systems without explicit authorization
- Use these techniques for malicious purposes
- Deploy exploits against production systems you don't own
- Share working exploits for unpatched vulnerabilities
Legal warning
Unauthorized access to computer systems is illegal in most jurisdictions (e.g., the CFAA in the US, the Computer Misuse Act in the UK). Violators may face criminal prosecution and civil liability. The author and publisher assume no liability for misuse of this information. By continuing, you agree to use this knowledge ethically and legally.
Hook & Context
You found something real. You spent hours — maybe days — confirming it, testing edge cases, checking that it isn't a known informational or a duplicate. You open the report editor, describe what you found, paste the request, add a screenshot, and submit. Three days later: Informative. Thanks for your submission. Or worse, Not Applicable. Out of scope.
The vulnerability didn't fail. The report did.
This is the handoff problem. At the moment you click submit, you exit the picture entirely. A triage engineer — often working through a queue of dozens of reports — picks up your work and has to reconstruct your mental model of the bug from a cold start. They've never seen your session. They don't know the edge case you noticed. They can't feel the "ohh, this is bad" moment you had. All they have is text, screenshots, and whatever reproduction steps you bothered to include. The gap between a valid bug and a paid bug is, shockingly often, the report itself. Not the exploit. The document.
This piece reverse-engineers the triage engineer's workflow and teaches you to write reports that survive the first 90 seconds of scrutiny — reports that get triaged fast, escalated appropriately, and paid at the severity they deserve.
TL;DR
🎯 A good bug report solves the triage engineer's job for them. Lead with impact, not mechanics. Give a one-sentence summary, a clear reproduction path, a precise severity justification, and a minimal PoC — not a novel. Structure reduces cognitive load. Cognitive load reduction means faster triage, higher trust, and better outcomes for your bounty.
Foundations & Theory
Why Triage Is Hard
Before you can write for a triage engineer, you need to understand what that role actually is. Triage on a mature bug bounty program is a filtering job operating under constant time pressure. A single triager on a well-known program might process anywhere from 20 to 80 reports on a busy day, across radically different vulnerability classes, asset types, and reporter skill levels. The majority of incoming reports are noise: duplicates, out-of-scope submissions, misunderstood behavior, or self-XSS with no meaningful impact.
This context shapes a triage engineer's reading behavior. They are not reading your report with fresh curiosity — they are scanning for disqualifying signals first. If something looks like a low-effort submission in the first paragraph, cognitive resources get rationed. The report gets a quick judgment call. This isn't malice. It's triage, literally — the medical term for sorting patients by urgency when resources are constrained.
The Mental Model Gap
When you found the bug, you built a complete mental model: the endpoint, the parameter, the behavior, the affected user flow, the worst-case scenario, and the chain of reasoning that connects them. That model lives entirely in your head. Your report is the lossy compression of that model into text.
The triage engineer has to reconstruct your mental model from that compressed output — often while half their attention is on the next item in the queue. Every ambiguous sentence, every skipped step, every vague impact statement forces them to either make an assumption (which may be unfavorable to you) or invest additional time asking for clarification (which delays resolution and signals low report quality).
Your job as a reporter is to minimize reconstruction cost. The closer your report gets to fully instantiating your mental model in someone else's head with zero ambiguity, the better your outcome.
Where It Fits in the Workflow
Report writing isn't a post-exploitation afterthought. It belongs in your methodology as a first-class phase, sitting between Validation and Submission:
Reconnaissance → Testing → Exploitation → Validation → [REPORT WRITING] → Submission → Follow-up
Most researchers treat this as a 10-minute task after the real work is done. The researchers who consistently earn Critical payouts treat it as a 30–60 minute craft exercise. The finding is the raw material. The report is the product.
Key Concepts in Depth
1. The First Paragraph Is Your Entire Career in That Report
A triage engineer's first 90 seconds on your report are almost entirely spent on the first paragraph — the summary. This is not the place to "build up" to the vulnerability. Lead with the impact, not the mechanics.
Bad summary:
I was testing the `/api/user/update` endpoint and noticed that the …
Good summary:
Unauthenticated horizontal privilege escalation via the `/api/user/update` endpoint allows any attacker to modify the email address of any user account by supplying a victim's `user_id`. This bypasses account ownership entirely and enables full account takeover without prior authentication.
The good summary does four things in two sentences: names the vulnerability class, identifies the affected component, states the attacker precondition, and names the impact in plain terms. A triage engineer reading that second version already knows the severity range before they've read a single reproduction step.
Think of your summary as a thesis statement. Everything else in the report is evidence supporting it.
2. Reproduction Steps Are a Recipe — Not a Story
Reproduction steps are where most reports fall apart operationally. The triage engineer needs to reproduce your finding — either to validate it or to hand it to a developer with a clear repro path. Steps that skip assumed context, reference "my session cookie," or skip intermediate navigation destroy the reproducibility of the report.
Write reproduction steps as if you are writing them for a technically competent person who has never used the application before. Number every step. Be explicit about account types required (authenticated vs. unauthenticated, admin vs. regular user, two accounts required, etc.). Include the exact request if the vulnerability is in an API call.
Minimal viable structure:
1. Create two accounts: Account A (attacker) and Account B (victim).
2. Log into Account A and navigate to [URL].
3. Intercept the following POST request using a proxy:
POST /api/user/update HTTP/1.1
Host: target.com
...
4. Modify the `user_id` parameter from [Account A's ID] to [Account B's ID].
5. Forward the request. Observe that the response confirms Account B's email has been updated.
6. Log into Account B. Observe the email is now controlled by Account A.
Notice the format: imperative verbs, numbered, account roles named, expected observation at the end. The expected observation step is critical — it tells the triager what "success" looks like, so they're not left wondering if something went wrong when they replicate it.
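Where the finding lives in a single request, you can go one step further and attach the repro as a runnable script. Below is a minimal sketch in Python (standard library only); the host, endpoint, cookie value, and parameter names are hypothetical placeholders matching the example above. The script only builds the raw request rather than sending it, so the triager can paste it straight into Burp Repeater or replay it with their own session.

```python
# Minimal IDOR PoC sketch. Host, path, cookie, and parameter names are
# hypothetical placeholders -- substitute values from your own test session.
import json

TARGET_HOST = "target.com"        # placeholder
ENDPOINT = "/api/user/update"     # placeholder
ATTACKER_SESSION = "REDACTED"     # attacker's own session cookie, redacted

def build_poc_request(victim_user_id: str, new_email: str) -> str:
    """Build the raw HTTP request that modifies the victim's email.

    Returning the raw request (instead of sending it inline) keeps the
    PoC replayable in any proxy without a live network dependency.
    """
    body = json.dumps({"user_id": victim_user_id, "email": new_email})
    return (
        f"POST {ENDPOINT} HTTP/1.1\r\n"
        f"Host: {TARGET_HOST}\r\n"
        f"Cookie: session={ATTACKER_SESSION}\r\n"
        "Content-Type: application/json\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
        f"{body}"
    )

if __name__ == "__main__":
    # Step 4 of the repro: Account A's request with user_id swapped
    # to Account B's (victim's) ID.
    print(build_poc_request(victim_user_id="1002", new_email="attacker@example.com"))
```

A script like this doubles as documentation: the triager can read the exact request shape without executing anything.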
3. Severity Justification Is Not a Feeling — Use CVSS or OWASP
"This is critical because an attacker could do a lot of damage" is not a severity argument. It is an assertion. Triage engineers and program managers apply frameworks — usually CVSS 3.1 or a platform's own rubric — and your report needs to map to that framework, not argue against it emotionally.
Learn the eight CVSS 3.1 base metrics: Attack Vector, Attack Complexity, Privileges Required, User Interaction, Scope, and the Confidentiality, Integrity, and Availability impacts. Then explicitly state your reasoning:
CVSS 3.1 Estimate: 9.1 (Critical)
- Attack Vector: Network (exploitable remotely)
- Attack Complexity: Low (no special conditions)
- Privileges Required: None (unauthenticated)
- User Interaction: None
- Scope: Unchanged (the attacker impacts another user's account, but within the same security authority)
- Confidentiality: High / Integrity: High / Availability: None
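If you want to sanity-check your own scoring rather than trust a calculator blindly, the CVSS 3.1 base formula fits in a few lines. The sketch below implements the published base-score equations for the Scope: Unchanged case only, with the metric weights copied from the FIRST.org specification, and reproduces the 9.1 above (vector `CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N`). Note that 9.1 is specifically the Scope: Unchanged score; flipping Scope to Changed changes both the coefficients and the result.

```python
# CVSS 3.1 base-score sketch, Scope: Unchanged only. Metric weights are
# taken from the FIRST.org v3.1 specification tables.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},  # Scope: Unchanged values
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"H": 0.56, "L": 0.22, "N": 0.0},
}

def roundup(x: float) -> float:
    """Spec-defined rounding: smallest value with one decimal >= x."""
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (i // 10000 + 1) / 10.0

def base_score_unchanged(av, ac, pr, ui, c, i, a) -> float:
    # Impact Sub-Score: combined loss across C/I/A.
    iss = 1 - (1 - WEIGHTS["CIA"][c]) * (1 - WEIGHTS["CIA"][i]) * (1 - WEIGHTS["CIA"][a])
    impact = 6.42 * iss
    if impact <= 0:
        return 0.0
    exploitability = (8.22 * WEIGHTS["AV"][av] * WEIGHTS["AC"][ac]
                      * WEIGHTS["PR"][pr] * WEIGHTS["UI"][ui])
    return roundup(min(impact + exploitability, 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N
print(base_score_unchanged("N", "L", "N", "N", "H", "H", "N"))  # -> 9.1
```

Walking a triager through this arithmetic is rarely necessary, but doing it once yourself makes each metric choice concrete instead of vibes-based.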
Even if the program doesn't use CVSS formally, this level of specificity forces the triage engineer to engage with your reasoning rather than dismiss it. If they disagree with your scoring, they have to articulate why — which means the conversation happens at the right level of abstraction.
4. Business Impact: Translate for the Room
Many bug bounty programs route high-severity findings to a program manager who is not a security engineer. They're making resourcing decisions, severity overrides, and payout approvals — and they're doing it with a non-technical lens. A report that speaks only in technical terms gets downgraded at this stage, not because the vulnerability isn't real, but because the decision-maker can't defend the payout to their leadership.
Add a short "Business Impact" section beneath your technical description. Write it in plain language. Map the vulnerability to concrete, recognizable harms:
Business Impact: An attacker exploiting this vulnerability could silently take over any user account on the platform. From a business perspective, this represents unauthorized access to customer financial data, liability under GDPR Article 32 for failure to protect personal data, and potential for large-scale account fraud. Reported publicly or exploited before remediation, this would represent significant reputational and regulatory risk.
You are not being dramatic. You are doing the translation work that the program manager would otherwise have to do themselves. Doing it for them makes the escalation path frictionless.
5. PoC Calibration: Minimal Script vs. Full Exploit Chain
There is a common misconception that a more impressive PoC earns a higher payout. This is sometimes true at the margins, but more often a minimal, clean, readable PoC is more valuable to a triage engineer than a complex exploit chain. Here's why: their job is validation, not offense. They need the simplest possible signal that the bug is real.
Use a minimal PoC script when:
- The bug requires non-obvious request manipulation
- The reproduction depends on timing, state, or token behavior
- A raw HTTP request alone is ambiguous
Use a full exploit chain when:
- The severity depends on chaining — e.g., a stored XSS that only becomes Critical when you demonstrate it leading to account takeover
- You are arguing for a higher severity than the component alone would justify
- The program explicitly rewards demonstrated impact
For video PoCs (Loom or similar): use these when the bug involves UI behavior, race conditions, or multi-step flows that are hard to capture in static screenshots. A 90-second screen recording narrated as you reproduce the steps eliminates ambiguity completely. Keep it under 3 minutes. Show the before and after state.
⚠️ Never include credentials, PII, or data exfiltrated from real user accounts in your PoC. Blur or redact anything that isn't yours. This is both an ethical requirement and a practical one — reports containing real user data get quarantined, not triaged.
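A quick scripted pass before pasting PoC output into a report catches the obvious leaks. The sketch below is illustrative, not an exhaustive PII scrubber; the patterns are examples, and you should still review the output manually afterwards.

```python
import re

# Illustrative redaction pass for PoC output before it goes into a report.
# These patterns are examples only -- adapt to what your target leaks,
# and manually review the result.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),
    (re.compile(r"(?i)(session|token|auth)=([^;\s&]+)"), r"\1=[REDACTED]"),
    (re.compile(r"\b\d{13,19}\b"), "[REDACTED-PAN]"),  # card-number-length digit runs
]

def redact(text: str) -> str:
    """Apply each redaction pattern in turn to the PoC output."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

sample = "Set-Cookie: session=eyJhbGciOi; user=victim@example.com"
print(redact(sample))
# -> Set-Cookie: session=[REDACTED]; user=[REDACTED-EMAIL]
```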
Alternatives & Comparison
Different contexts call for adjusted approaches:
| Scenario | Approach |
|---|---|
| Simple, well-understood bug (e.g., reflected XSS with clear PoC) | Short report, one-request PoC, standard CVSS breakdown |
| Complex chain (IDOR → session fixation → ATO) | Full narrative, step-by-step chain, video PoC, elevated business impact section |
| Unclear scope / borderline asset | Explicitly acknowledge the ambiguity upfront; cite the program's scope page |
| Private program with known internal terminology | Mirror the company's own language for their product components — shows familiarity |
| First report to a new program | Err toward over-documentation; build trust before optimizing for speed |
The common failure mode across all of these is optimizing for your clarity rather than their clarity. You already understand the bug. Write for someone who doesn't.
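Whatever the scenario, the underlying skeleton stays the same. A template sketch follows; the section names are suggestions drawn from the practices above, not a platform requirement.

```markdown
## Summary
One or two sentences: vulnerability class, affected component,
attacker precondition, impact.

## Severity
CVSS 3.1 vector and score, with a one-line justification per metric.

## Steps to Reproduce
1. Numbered, imperative steps with account roles named.
2. Exact requests for any API calls.
3. Final step: what "success" looks like.

## Proof of Concept
Minimal script, raw request, or short video link.
Redact anything that isn't yours.

## Business Impact
Plain-language translation of the harm for non-technical reviewers.
```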
Takeaways & Further Reading
📚 Key takeaways:
- Lead with impact. The summary paragraph is your one shot at a first impression. Make it a thesis, not a preamble.
- Reproduction steps are a recipe. Number them. Name the account roles. State what "success" looks like.
- Anchor severity to a framework. CVSS 3.1 reasoning is not optional for high/critical findings — it forces the conversation into productive territory.
- Translate for non-technical stakeholders. A business impact section is not fluff — it's what gets an escalation approved by a program manager who can't read an HTTP request.
- Match your PoC complexity to the validation need, not to impressiveness. Minimal and reproducible beats complex and brittle.
Further Reading:
- HackerOne Disclosure Guidelines — understand what platforms actually expect
- CVSS 3.1 Specification Document — read this once properly; it will change how you score findings forever
- Google Project Zero Bug Disclosure Posts — study the structure of elite-level vulnerability write-ups
- The Art of Software Security Assessment (Dowd, McDonald, Schuh) — for building the technical vocabulary that makes your reports precise
- Nahamsec's public talks on bug bounty methodology — practical, community-grounded perspective on the reporting craft
The best researchers are not always the best paid. The best paid researchers are the ones who understand that security research has two products: the finding and the report. Finding the bug is the technical work. Writing the report is the communication work. Both are skills. Both can be improved. And unlike exploitation techniques, report writing is something you can practice on every single submission, starting today.