A security audit is only as valuable as the report it produces. You can spend weeks reviewing code, uncovering critical vulnerabilities, and building sophisticated proofs of concept - but if the report is unclear, poorly structured, or missing context, those findings may never get fixed.
At Zokyo, we have reviewed and authored hundreds of security audit reports across smart contracts, protocols, and infrastructure. The pattern is consistent: reports that communicate clearly get issues resolved quickly. Reports that do not, regardless of how technically impressive the work behind them was, end up collecting dust.
This guide breaks down what separates a good security report from a great one. Whether you are an auditor writing findings for a client, a bug bounty hunter submitting to a platform, or a security team producing internal assessments, the principles remain the same.
Why Report Quality Matters
A security report is the primary deliverable of any audit engagement. It is the artifact that development teams use to prioritize remediation, that executives use to make risk decisions, and that compliance teams reference during regulatory reviews.
Poor reports create real problems:
- Findings get deprioritized because the developer does not understand the impact
- Duplicate work occurs when teams cannot determine whether an issue has already been identified
- Trust erodes between the audit team and the client when reports feel like boilerplate output
- Vulnerabilities remain unpatched because the recommended fix was vague or impractical
Conversely, a well-written report accelerates remediation. It helps developers fix issues without back-and-forth clarification. It gives leadership confidence in the security posture. And for auditors, it builds reputation and trust - leading to repeat engagements and referrals.
Anatomy of a Finding
Every individual finding in a security report should be self-contained and actionable. A reader should be able to understand the vulnerability, its impact, and how to fix it without needing to read any other section of the report.
Title
The title is the first thing a developer reads. It should be specific, descriptive, and immediately convey the nature of the issue.
"Access Control Issue" - This tells the reader almost nothing. Where? What kind of access control? What is the consequence?
"Missing Authorization Check in withdrawFunds() Allows Any User to Drain Contract Balance" - Specific function, specific impact, immediately actionable.
A good title pattern follows the structure: [Root Cause] in [Location] Leads to [Impact]. This gives the reader everything they need at a glance.
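The pattern is mechanical enough to encode. As a sketch, the Python helper below (names are illustrative, not part of any standard tooling) assembles a title from its three parts:

```python
def finding_title(root_cause: str, location: str, impact: str) -> str:
    """Build a title following the [Root Cause] in [Location] Leads to [Impact] pattern."""
    return f"{root_cause} in {location} Leads to {impact}"

# Hypothetical example inputs:
title = finding_title(
    "Missing Authorization Check",
    "withdrawFunds()",
    "Unauthorized Draining of the Contract Balance",
)
# -> "Missing Authorization Check in withdrawFunds() Leads to Unauthorized Draining of the Contract Balance"
```

If the assembled string runs past roughly 100 characters, that is usually a sign the impact clause belongs in the description instead.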
Severity Classification
Every finding needs a severity rating. The most common framework uses four levels:
- Critical: Direct loss of funds, permanent denial of service, or full privilege escalation. Exploitable without special conditions.
- High: Significant financial loss or functional compromise under realistic conditions. Requires some preconditions but is practically exploitable.
- Medium: Limited impact or requires unlikely but possible conditions. Often involves partial loss of functionality, data exposure, or griefing.
- Low: Minimal direct impact. Best practice violations, gas optimizations with security implications, or issues that require extensive preconditions.
In Web3 specifically, severity is often tied directly to financial impact. A vulnerability that enables draining of protocol funds is Critical regardless of how complex the exploit path is. Context matters - a reentrancy vulnerability in a contract holding $50M is fundamentally different from the same bug in a test deployment.
Severity should reflect the realistic worst-case impact, not the theoretical maximum. An issue that requires three separate admin key compromises plus a specific block timestamp alignment is not Critical just because the theoretical outcome is fund loss.
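One way to keep these judgments consistent across a report is an explicit impact/likelihood matrix. The sketch below is illustrative only (the bucket names and the mapping are assumptions, not a published standard), but it captures the idea that realistic preconditions pull severity down:

```python
from enum import Enum

class Severity(Enum):
    CRITICAL = "Critical"
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

# Assumed buckets: impact in {"severe", "limited", "minimal"},
# likelihood in {"likely", "possible", "unlikely"}.
MATRIX = {
    ("severe", "likely"): Severity.CRITICAL,
    ("severe", "possible"): Severity.HIGH,
    ("severe", "unlikely"): Severity.MEDIUM,
    ("limited", "likely"): Severity.HIGH,
    ("limited", "possible"): Severity.MEDIUM,
    ("limited", "unlikely"): Severity.LOW,
    ("minimal", "likely"): Severity.MEDIUM,
    ("minimal", "possible"): Severity.LOW,
    ("minimal", "unlikely"): Severity.LOW,
}

def rate(impact: str, likelihood: str) -> Severity:
    """Map realistic worst-case impact and likelihood to a severity level."""
    return MATRIX[(impact, likelihood)]
```

Note that a matrix like this is a starting point, not a substitute for the Web3-specific rule above: direct fund loss in a live, high-TVL contract stays Critical even when the path is complex.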
Description
The description is where you explain what the vulnerability is and why it exists. This section should answer three questions:
- What is the issue? A clear, technical explanation of the flaw.
- Where does it occur? Specific file, function, and line references.
- Why does it exist? The underlying root cause - missing validation, incorrect assumption, logic error, etc.
Avoid jargon without explanation. The description should be readable by a senior developer who is familiar with the codebase but may not be a security specialist. If a concept requires context (such as a specific attack vector like flash loan manipulation or oracle price deviation), explain it briefly.
Proof of Concept
A proof of concept (PoC) transforms a theoretical finding into a demonstrated vulnerability. It is the single most important element for getting a finding taken seriously and fixed promptly.
A strong PoC should:
- Be reproducible. Another engineer should be able to run it and see the same result.
- Be minimal. Strip away everything that is not essential to demonstrating the vulnerability.
- Show impact clearly. If the bug allows fund theft, the PoC should show a balance changing. If it enables unauthorized access, show the access being granted.
For smart contract audits, PoCs are typically written as test cases:
```solidity
function test_unauthorizedWithdraw() public {
    // Setup: deposit funds as legitimate user
    vm.prank(alice);
    vault.deposit{value: 10 ether}();

    // Attack: withdraw as unauthorized user
    vm.prank(attacker);
    vault.withdrawFunds(10 ether);

    // Verify: attacker received the funds
    assertEq(attacker.balance, 10 ether);
    assertEq(address(vault).balance, 0);
}
```
For bug bounty submissions in particular, the PoC is often the deciding factor between a valid payout and a rejection. Platforms like Immunefi explicitly require functional PoCs for most severity levels. A finding without a PoC is, in many triagers' eyes, an incomplete finding.
Impact Analysis
The impact section translates technical consequences into business terms. This is where you bridge the gap between what the code does wrong and what that means for the protocol, its users, and its funds.
Impact should address:
- Who is affected? All users, specific roles, or only the admin?
- What is the financial exposure? TVL at risk, maximum extractable value, or scope of loss.
- What is the operational consequence? Protocol freeze, governance capture, data corruption.
- Is the vulnerability exploitable right now? Is the vulnerable code already live on mainnet, or is this a pre-deployment audit?
Recommendation
Every finding must include a concrete, actionable recommendation for remediation. "Fix the access control issue" is not a recommendation. A recommendation should be specific enough that a developer can implement it without further consultation.
```solidity
function withdrawFunds(uint256 amount) external {
    // Add authorization check
    require(
        msg.sender == owner || hasRole(WITHDRAWER_ROLE, msg.sender),
        "Unauthorized"
    );

    // Validate withdrawal amount
    require(amount <= balances[msg.sender], "Insufficient balance");

    balances[msg.sender] -= amount;
    (bool success, ) = msg.sender.call{value: amount}("");
    require(success, "Transfer failed");
}
```
Where multiple remediation approaches exist, briefly outline the trade-offs. For example, if an issue can be fixed with either a require statement or a modifier pattern, note both options and explain when each is preferable.
Structuring the Full Report
Individual findings are the core of any report, but the document as a whole needs structure to be useful. A well-organized report enables different audiences to extract the information they need without reading the entire document.
Executive Summary
The executive summary is written for non-technical stakeholders. It should be readable in under two minutes and cover:
- What was audited (scope, version, commit hash)
- When the audit was conducted
- How many findings were identified, broken down by severity
- The overall security posture assessment
- Key risks that require immediate attention
Avoid technical details in the executive summary. If a finding involves a reentrancy vulnerability in a yield aggregator's harvest function, the executive summary should say something like: "A critical vulnerability was identified that could allow an attacker to drain deposited funds from the protocol."
Scope and Methodology
This section documents exactly what was reviewed and how. It should include:
- Repository URL and commit hash - pinning the exact code version
- Contracts in scope - listing every file reviewed
- Contracts out of scope - explicitly noting what was excluded and why
- Testing methodology - manual review, automated analysis, fuzzing, formal verification
- Tools used - Slither, Mythril, Echidna, custom tooling
- Time allocated - total auditor-hours or auditor-days
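Capturing these fields in a machine-readable record makes them easy to validate and reuse across engagements. A minimal sketch (the field names, repository, and values are hypothetical, not a standard schema):

```python
from dataclasses import dataclass

@dataclass
class AuditScope:
    """Machine-readable scope record; structure is illustrative."""
    repository: str
    commit: str                 # pins the exact code version reviewed
    in_scope: list              # every file reviewed
    out_of_scope: dict          # path -> reason it was excluded
    methodology: list           # e.g. manual review, fuzzing
    tools: list                 # e.g. Slither, Echidna
    auditor_days: float

# Hypothetical example:
scope = AuditScope(
    repository="https://github.com/example/protocol",
    commit="a1b2c3d",
    in_scope=["src/Vault.sol", "src/Strategy.sol"],
    out_of_scope={"src/mocks/": "test-only code"},
    methodology=["manual review", "fuzzing"],
    tools=["Slither", "Echidna"],
    auditor_days=10,
)
```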
This section protects both the auditor and the client. It sets clear expectations about what the audit did and did not cover.
Findings Overview Table
Before diving into individual findings, include a summary table that lists every finding with its ID, title, severity, and status (open, acknowledged, resolved). This allows readers to quickly scan the overall picture.
| ID | Title | Severity | Status |
|--------|------------------------------------------|----------|----------|
| ZOK-01 | Missing auth in withdrawFunds() | Critical | Resolved |
| ZOK-02 | Unchecked return value in token transfer | High | Resolved |
| ZOK-03 | Front-running risk in swap execution | Medium | Acknowledged |
| ZOK-04 | Floating pragma in all contracts | Low | Resolved |
| ZOK-05 | Missing event emission on state change | Low | Open |
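A table like this is easy to generate from structured finding records, which also guarantees the ordering stays consistent with the severity ranking. A minimal sketch (the finding rows are examples, mirroring the table above):

```python
SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

findings = [
    ("ZOK-04", "Floating pragma in all contracts", "Low", "Resolved"),
    ("ZOK-01", "Missing auth in withdrawFunds()", "Critical", "Resolved"),
    ("ZOK-03", "Front-running risk in swap execution", "Medium", "Acknowledged"),
    ("ZOK-02", "Unchecked return value in token transfer", "High", "Resolved"),
]

def overview_table(rows):
    """Render a markdown findings overview, ordered Critical -> Low."""
    ordered = sorted(rows, key=lambda r: SEVERITY_ORDER[r[2]])
    lines = ["| ID | Title | Severity | Status |", "|---|---|---|---|"]
    lines += [f"| {i} | {t} | {s} | {st} |" for i, t, s, st in ordered]
    return "\n".join(lines)

print(overview_table(findings))
```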
Detailed Findings
This is the body of the report. Each finding follows the structure outlined above: title, severity, description, PoC, impact, and recommendation. Findings are typically ordered by severity (Critical first, then High, Medium, Low) and within each severity level, by potential impact.
Appendices
Use appendices for supplementary material that supports findings but would clutter the main body:
- Full PoC test suites
- Static analysis tool output
- Test coverage reports
- Gas optimization suggestions (if out of main scope)
- Informational observations that do not constitute findings
Common Mistakes in Security Reports
Presenting Tool Output as Findings
Automated tools like Slither and Mythril are valuable for initial triage, but their raw output is not a finding. A Slither detector flagging a "reentrancy" does not mean a reentrancy vulnerability exists. The auditor's job is to validate, contextualize, and communicate - not to reformat tool output.
If a tool helped identify a real issue, reference it. But the finding itself must include human analysis: why is this actually exploitable in this specific context? What is the realistic attack path?
Writing Without Context
A finding that says "the function does not check for zero address" is technically accurate but lacks context. Why does this matter? Can someone actually pass a zero address in a realistic scenario? What happens if they do? Does the contract hold funds that become permanently locked?
Always answer the implicit "so what?" question. If you cannot articulate why a finding matters in practice, it may not belong in the report - or it may belong as an informational note rather than a rated finding.
Vague Recommendations
Recommendations like "implement proper access control" or "add input validation" are not actionable. They describe the category of fix without specifying the implementation.
Instead, provide code-level recommendations. Show the exact check, modifier, or pattern that resolves the issue. If the fix involves architectural changes, describe the approach in enough detail that a developer can plan the work.
Severity Inflation
Rating everything as High or Critical undermines the entire severity system. When a report contains twelve Critical findings and eight of them are gas optimizations or informational notes with inflated severity, the three genuinely Critical issues get buried.
Be honest about severity. A low-impact issue is a Low. Calling it Medium because you want it to get attention does the opposite - it trains developers to distrust severity ratings across the board.
No Resolution Tracking
A report that lists findings without tracking their resolution status is incomplete. The report should be a living document (at least through the remediation phase) that records:
- Whether each finding has been addressed
- The commit hash of the fix
- Whether the fix was verified by the auditor
- Any findings that were acknowledged but not fixed, along with the client's rationale
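These fields lend themselves to a small per-finding record, where "complete" tracking can be checked mechanically. The rules below simply encode this guidance as assertions; they are a sketch, not a formal standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Resolution:
    finding_id: str
    status: str                              # "open" | "acknowledged" | "resolved"
    fix_commit: Optional[str] = None         # commit hash of the fix, if any
    verified_by_auditor: bool = False
    client_rationale: Optional[str] = None   # required when acknowledged-not-fixed

def is_complete(r: Resolution) -> bool:
    """Resolved findings need a fix commit and auditor verification;
    acknowledged findings need the client's rationale on record."""
    if r.status == "resolved":
        return r.fix_commit is not None and r.verified_by_auditor
    if r.status == "acknowledged":
        return r.client_rationale is not None
    return True  # open findings carry no extra requirements yet
```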
Writing for Different Audiences
A security report is read by multiple audiences, each with different needs:
Developers
Developers need precise technical detail: exact file paths, line numbers, function names, and code-level fix recommendations. They want to understand the root cause quickly and implement a fix efficiently. For developers, the PoC and recommendation sections are the most valuable.
Executives and Decision-Makers
Executives care about business risk, not exploit mechanics. They need to know: How much money is at risk? Is the protocol safe to launch? What is the priority order for fixes? The executive summary and severity distribution are their primary entry points.
Compliance and Legal Teams
Compliance teams need the scope and methodology section to verify that required security reviews were conducted. They may need the report as evidence for regulatory filings, insurance applications, or partner due diligence. Clarity, completeness, and professional formatting matter here more than technical depth.
Community and Public Disclosure
For public audit reports (common in DeFi and open-source projects), the audience includes investors, users, and other security researchers. These readers need enough technical detail to evaluate the audit's thoroughness, but the report should also be accessible to informed non-specialists.
Web3-Specific Considerations
Security reporting in Web3 carries additional considerations that do not exist in traditional software auditing:
Immutability and Deployment Context
Once a smart contract is deployed to mainnet, vulnerabilities cannot be patched in the traditional sense. This elevates the importance of pre-deployment audit reports. Every finding carries more weight because the window for remediation closes at deployment.
Reports should clearly distinguish between pre-deployment and post-deployment contexts. A Medium finding in a pre-deployment audit might warrant Critical-level urgency if the contract is about to be deployed immutably with significant TVL expected.
Economic Attack Vectors
Web3 security reports must account for economic attacks that have no analogue in traditional security: flash loan manipulation, oracle price deviation, sandwich attacks, MEV extraction, and governance manipulation. These require the auditor to think like a financially motivated adversary, not just a technical one.
When documenting economic attack vectors, include specific numbers where possible. "An attacker could use a flash loan to manipulate the price oracle" is less useful than "An attacker borrowing 10,000 ETH via Aave could manipulate the Uniswap V3 TWAP oracle by approximately 15%, enabling extraction of approximately $2.3M from the lending pool."
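When quoting numbers like these, it helps to show the arithmetic behind them. A back-of-envelope attacker P&L, where every input is an assumption (the ETH price, a 0.09% flash-loan fee, and a flat gas cost are all illustrative, not real market data):

```python
def flash_loan_profit(loan_eth: float, eth_price_usd: float,
                      extracted_usd: float, fee_rate: float = 0.0009,
                      gas_usd: float = 500.0) -> float:
    """Net attacker profit: extracted value minus flash-loan fee and gas.
    fee_rate of 0.09% is an assumption; real fees vary by venue."""
    fee_usd = loan_eth * eth_price_usd * fee_rate
    return extracted_usd - fee_usd - gas_usd

# Using figures like the example above (all assumed):
profit = flash_loan_profit(loan_eth=10_000, eth_price_usd=3_000,
                           extracted_usd=2_300_000)
# roughly $2.27M net -- the fee on a $30M loan barely dents the take
```

Showing the math this way also makes the finding falsifiable: if the protocol team believes the oracle cannot move 15%, they can challenge a specific input rather than the whole claim.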
Composability and External Dependencies
DeFi protocols rarely exist in isolation. They interact with other protocols, oracles, bridges, and governance systems. Security reports should document external dependencies and the assumptions made about their behavior.
If a protocol assumes that a particular oracle will always return a price within a certain range, or that a DEX will always have sufficient liquidity, document these assumptions explicitly. When those assumptions can be violated, document the consequences.
From Good to Great: What Separates the Best Reports
The difference between a competent report and an exceptional one often comes down to a few qualities:
- Narrative coherence. The best reports tell a story. They do not just list findings - they explain how the findings relate to each other and to the overall security posture. If three separate Medium findings can be chained together to create a Critical attack path, the report should say so.
- Threat modeling context. Great reports include a threat model section that describes the likely adversaries, their capabilities, and their motivations. This gives findings a framework: a finding matters more or less depending on who the attacker is and what they are trying to achieve.
- Clear writing. Security is complex enough without adding unnecessary complexity through poor writing. Use short sentences. Avoid passive voice where possible. Define terms on first use. One idea per paragraph.
- Visual aids. Architecture diagrams, attack flow diagrams, and annotated code snippets make complex attack paths easier to understand. A diagram showing the sequence of transactions in a flash loan attack is worth more than three paragraphs of description.
- Honest assessment. The best reports acknowledge limitations. If time constraints prevented full coverage of a particular module, say so. If a finding is borderline between two severity levels, explain the reasoning. Intellectual honesty builds trust.
Special Case: Bug Bounty Reports
Bug bounty submissions follow the same principles as audit findings, but the context is different. You are writing for a triager who may review dozens of submissions per day. Your report needs to stand out by being clear, complete, and immediately verifiable.
Tips for Effective Bug Bounty Submissions
- Lead with impact. The triager's first question is "does this matter?" Answer it in the first sentence.
- Include a working PoC. Not pseudocode. Not a description of steps. An actual test case or script that demonstrates the vulnerability.
- Reference the exact scope. Link to the deployed contract address, the repository commit, or the specific asset listed in the bounty program.
- Do not over-claim severity. Inflating severity to chase a higher payout backfires. Triagers downgrade aggressively when the claimed impact does not match the demonstrated impact.
- Be concise. A three-page submission with one clear vulnerability gets resolved faster than a ten-page submission that buries the finding in background research.
Closing Thoughts
Writing security reports is a skill that compounds over time. Every report you write teaches you something about communication, about risk, and about the gap between what an auditor knows and what a reader understands.
The goal is not to produce a document. The goal is to produce an outcome - vulnerabilities fixed, systems hardened, risk reduced. The report is the vehicle for that outcome, and the quality of the vehicle determines whether you arrive at the destination.
At Zokyo, we treat report writing with the same rigor we apply to the audit itself. Because the best security work in the world is wasted if nobody can understand it.