Cryptocurrency companies are reporting a sharp increase in AI-generated bug bounty submissions, a trend that is straining security teams and raising questions about the quality of automated vulnerability reports flooding into bounty programs.
The surge, reported by Cointelegraph, reflects a broader pattern in which researchers use large language models and other AI tools to generate and submit security reports at scale. Crypto firms, which operate high-value targets including exchanges, wallets, and DeFi protocols, appear to be absorbing a disproportionate share of these submissions.
The trend is not unique to crypto. Daniel Stenberg, the creator of curl, has written about receiving what he called AI-generated “slop” security reports that superficially resemble legitimate vulnerability disclosures but fail to identify real bugs. These reports often use confident language and plausible formatting, making them harder to dismiss at a glance.
Rising Volume Does Not Mean Rising Quality
The core problem for crypto security teams is the gap between submission volume and valid vulnerability discovery. AI tools can produce reports quickly, but many of those reports describe issues that are duplicates, false positives, or simply not exploitable in the target system’s architecture.
Each submission still requires human review. A triage engineer must read the report, attempt to reproduce the described vulnerability, and determine whether it represents a genuine risk. When a significant portion of incoming reports is low-signal, the cost of running a bounty program rises without a proportional increase in actual security findings.
For crypto firms specifically, the stakes of missing a real vulnerability among a flood of noise are high. Exploits against smart contracts or exchange front ends can result in immediate, irreversible financial losses. Slower triage means a longer window in which a legitimate critical report might sit unreviewed.
HackerOne, one of the largest bug bounty platforms, has acknowledged the shifting dynamics in its 2025 researcher signals report, pointing to changes in how platforms evaluate submission quality and researcher credibility.
How Bounty Programs Could Adapt
If AI-assisted submission volume continues to climb, bounty programs serving crypto firms are likely to tighten their intake processes. Stricter proof-of-concept requirements, where submitters must demonstrate a working exploit rather than describing a theoretical attack, would filter out many AI-generated reports that lack practical validation.
Some programs may also increase minimum quality thresholds, requiring submitters to show evidence of manual testing or provide transaction hashes and on-chain data when reporting smart contract vulnerabilities. This would raise the barrier for automated submissions while preserving access for skilled researchers.
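As a rough illustration of what such an intake filter might look like, here is a minimal sketch in Python. The field names (`proof_of_concept`, `description`), the length threshold, and the requirement of an on-chain transaction hash are all hypothetical assumptions for this example, not the rules of any real bounty platform.

```python
import re

# Hypothetical intake check: require a substantive proof-of-concept
# section and at least one plausible Ethereum-style transaction hash
# before a smart-contract report enters the human triage queue.
TX_HASH_RE = re.compile(r"\b0x[0-9a-fA-F]{64}\b")

def passes_intake(report: dict) -> bool:
    """Return True if the report meets an assumed minimum quality bar."""
    poc = report.get("proof_of_concept", "").strip()
    if len(poc) < 100:  # reject empty or one-line "theoretical" PoCs
        return False
    # On-chain evidence: at least one transaction hash anywhere in the report
    body = poc + " " + report.get("description", "")
    return bool(TX_HASH_RE.search(body))

# A report with no working exploit or on-chain evidence is filtered out
# before it consumes a triage engineer's time.
weak = {
    "description": "Reentrancy may be possible in the withdraw function.",
    "proof_of_concept": "Call withdraw twice.",
}
print(passes_intake(weak))  # False
```

A check like this would not catch every low-quality report, but it shifts the cost of producing evidence onto the submitter rather than the triage team.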
Reputation-weighted triage is another likely response. Platforms could prioritize reports from researchers with a track record of valid findings, pushing unverified submitters into a slower review queue. This approach mirrors how emerging crypto platforms handle trust in other contexts: weighting demonstrated credibility over raw volume.
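The reputation-weighted idea can be sketched as a priority queue in which a submitter's historical hit rate scales the claimed severity. The scoring formula, the baseline weight for unverified submitters, and all identifiers below are illustrative assumptions, not a description of how any platform actually ranks reports.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedReport:
    priority: float                      # lower value = reviewed sooner
    report_id: str = field(compare=False)

def priority_for(valid_findings: int, total_submissions: int, severity: int) -> float:
    """Blend claimed severity with the submitter's historical hit rate.

    Assumed scoring: unverified submitters (hit rate 0) keep a small
    baseline weight, so their reports sink toward the back of the queue
    rather than being dropped outright.
    """
    hit_rate = valid_findings / total_submissions if total_submissions else 0.0
    return -(severity * (0.25 + hit_rate))

queue: list[QueuedReport] = []
# A medium-severity report from a researcher with a 75% valid-finding rate...
heapq.heappush(queue, QueuedReport(priority_for(9, 12, severity=3), "rpt-veteran"))
# ...versus a "critical" report from a submitter with 40 misses and no hits.
heapq.heappush(queue, QueuedReport(priority_for(0, 40, severity=5), "rpt-unknown"))

print(heapq.heappop(queue).report_id)  # "rpt-veteran" is reviewed first
```

Under this weighting, the trusted researcher's medium-severity report (priority -3.0) outranks the unverified critical claim (-1.25), which is exactly the trade-off such a scheme makes: faster review for demonstrated credibility at the cost of slower handling for newcomers.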
The operational challenge for crypto security teams is clear: they need to maintain open bounty programs that attract legitimate researchers while building defenses against a rising tide of automated noise. The firms that adapt their triage workflows fastest will be better positioned to catch real vulnerabilities before they become costly exploits.
Disclaimer: This article is for informational purposes only and does not constitute financial or investment advice. Cryptocurrency and digital asset markets carry significant risk. Always do your own research before making decisions.
