There has been a recent surge of AI and DePIN (Decentralized Physical Infrastructure Networks) projects being built on blockchains. DePIN provides compute for AI models to be trained and run over a decentralized network of machines, offering a more decentralized and transparent approach than running AI on traditional centralized servers.
This convergence opens up a wide range of potential applications - from prediction markets to scam detection. But the most important and impactful application of crypto + AI is smart contract security.
If we want to protect blockchain networks from further million-dollar hacks and losses, AI is the fastest and most impactful way forward. In this article, we explore how AI is being applied across smart contract defense and bug finding, what works today, what falls short, and where the real opportunities lie.
AI + DePIN: Decentralized Compute for AI
AI is traditionally run on centralized servers - large-scale data centers operated by a handful of cloud providers. DePIN promises a more decentralized and transparent approach: running AI over a distributed network of independently operated machines.
In this model, anyone with available compute resources can contribute to AI model training and inference. The benefits include:
- Reduced single points of failure: No single entity controls the compute layer
- Censorship resistance: Models and inference requests cannot be arbitrarily shut down
- Cost efficiency: Idle compute resources across the world are put to productive use
- Transparency: On-chain verification of compute tasks and results
This infrastructure layer is critical because it enables the next generation of AI-powered applications in blockchain security to run without relying on any single centralized provider.
Formal Verification of Smart Contracts Using AI
One of the most promising applications of AI in smart contract security is formal verification - the process of mathematically proving that a smart contract behaves exactly as specified, under all possible inputs and conditions.
Formal verification using AI was recently demonstrated by researchers from MetaTrust Labs and NTU Singapore, who developed a viable LLM-driven formal verification system called PropertyGPT.
PropertyGPT: LLM-Driven Formal Verification
PropertyGPT was built on a corpus of 623 human-written properties collected from 23 Certora projects. The system takes a smart contract as input, generates formal properties that should hold true, and then verifies them against the actual contract behavior.
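The workflow above can be pictured as a retrieve-generate-verify loop. The sketch below is purely illustrative - the function names, the keyword-overlap retrieval, and the stubbed LLM and verifier calls are our assumptions, not PropertyGPT's actual implementation:

```python
# Illustrative sketch of a PropertyGPT-style pipeline: retrieve reference
# properties similar to the target contract, draft candidate properties for
# it, then check each candidate with a verifier. All names and stubs here
# are hypothetical, not PropertyGPT's real API.
from dataclasses import dataclass

@dataclass
class Property:
    name: str
    spec: str  # a rule expressed as text, e.g. in a CVL-like language

def retrieve_similar(contract_src: str, corpus: list, k: int = 3) -> list:
    # Stand-in for embedding-based retrieval: rank by shared keywords.
    def score(p: Property) -> int:
        return sum(word in contract_src for word in p.spec.split())
    return sorted(corpus, key=score, reverse=True)[:k]

def draft_properties(contract_src: str, examples: list) -> list:
    # Stand-in for the LLM call: the real system would prompt a model
    # with the contract source plus the retrieved example properties.
    return [Property("no_free_mint", "totalSupply grows only via a paid mint")]

def verify(contract_src: str, prop: Property) -> bool:
    # Stand-in for the formal verifier (e.g. a Certora-style prover).
    return "mint" in contract_src

corpus = [Property("supply_conserved", "sum of balances equals totalSupply")]
contract = "function mint(address to, uint amount) external payable { ... }"
candidates = draft_properties(contract, retrieve_similar(contract, corpus))
verified = [p.name for p in candidates if verify(contract, p)]
print(verified)  # properties that held up under verification
```

The key design idea is that the LLM only proposes properties; a deterministic verifier decides whether they hold, so hallucinated properties are filtered out rather than reported.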
The results were impressive. PropertyGPT successfully detected 9 out of 13 CVEs and 17 out of 24 attack incidents. This represents a significant step forward for practical AI-based formal verification.
To validate the real-world applicability, the researchers submitted their findings to Code4rena - a competitive smart contract auditing platform where security researchers earn bounties for discovering vulnerabilities. PropertyGPT successfully claimed $8,256 in bounty rewards, demonstrating that AI-driven formal verification can find real, exploitable issues that human auditors might miss.
Custom-trained AI models like PropertyGPT, built on domain-specific data from Certora projects, significantly outperform general-purpose LLMs for formal verification tasks. The specificity of training data matters enormously.
Real-Time Monitoring and Threat Detection
Another promising use case for AI in blockchain and crypto is real-time monitoring and threat detection for smart contracts. Rather than catching vulnerabilities before deployment, this approach focuses on detecting and stopping attacks as they happen on-chain.
Mamoru.ai: Decentralized Threat Detection
One project leading this approach is Mamoru.ai, which detects and blocks attacks in real time. The system works by deploying decentralized nodes that keep watch for malicious activities and threats across blockchain networks.
What makes Mamoru particularly interesting is its flexibility:
- Custom rule creation: Rules can be written depending on the specific project logic, rather than relying solely on generic detection patterns
- Decentralized monitoring: Nodes are distributed, so there is no single point of failure in the monitoring infrastructure
- Real-time blocking: Detected attacks can be blocked before they complete, potentially preventing fund loss
- Adaptive detection: The system can learn from new attack patterns and update its detection capabilities accordingly
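To make "custom rule creation" concrete, here is a toy detection rule of the kind a monitoring node might evaluate per transaction. The `Transaction` fields, thresholds, and rule logic are our illustrative assumptions, not Mamoru's actual SDK:

```python
# Hypothetical per-transaction detection rule for a monitored contract.
# A node evaluating rules like this could raise an alert or block the
# transaction before funds leave the contract.
from dataclasses import dataclass

@dataclass
class Transaction:
    sender: str
    target: str
    call_depth: int    # nested external calls observed during execution
    balance_delta: int # net change (in wei) of the monitored contract's balance

def flash_drain_rule(tx: Transaction, max_outflow: int = 10**18) -> bool:
    """Flag transactions that drain more than max_outflow in a single call
    while re-entering the contract (call_depth > 1)."""
    return tx.balance_delta < -max_outflow and tx.call_depth > 1

suspicious = flash_drain_rule(
    Transaction("0xattacker", "0xvault", call_depth=3, balance_delta=-5 * 10**18)
)
print(suspicious)  # True -> alert raised / transaction blocked
```

Because rules like this encode project-specific logic (which balances matter, what a normal outflow looks like), they can catch attacks that generic signature-based detectors miss.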
The combination of AI-powered analysis with decentralized infrastructure creates a defense layer that traditional centralized monitoring tools cannot replicate at the same level of resilience.
Testing ChatGPT-4o on Smart Contract Vulnerabilities
Given the hype around large language models, we at Zokyo ran our own tests on vulnerable code to check whether the latest version of ChatGPT (ChatGPT-4o) can correctly detect issues in smart contracts.
We tested ChatGPT-4o against a set of smart contracts containing known CVEs (Common Vulnerabilities and Exposures). The results were telling.
Results: Limited Detection Accuracy
ChatGPT-4o correctly detected only 2 CVEs. The remaining CVEs were flagged incorrectly - the model identified vulnerabilities that were not actually present in the contracts, producing false positives that could lead auditors down the wrong path.
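Scoring an evaluation like this reduces to comparing the model's flagged findings against the known labels. The sketch below shows the bookkeeping; the CVE identifiers are placeholders, not our actual test set:

```python
# Minimal scoring sketch for a model-vs-known-CVEs evaluation.
# Identifiers below are placeholders, not the contracts we tested.
def score(findings: set, known: set) -> dict:
    tp = findings & known   # real issues the model caught
    fp = findings - known   # hallucinated issues (false positives)
    fn = known - findings   # real issues the model missed
    return {
        "detected": len(tp),
        "false_positives": len(fp),
        "missed": len(fn),
        "precision": len(tp) / len(findings) if findings else 0.0,
    }

known_cves = {"CVE-A", "CVE-B", "CVE-C", "CVE-D"}
model_flags = {"CVE-A", "CVE-B", "bogus-reentrancy", "bogus-overflow"}
print(score(model_flags, known_cves))
```

In a security context, the false-positive count matters as much as the detection count: every hallucinated finding costs auditor time to disprove.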
This outcome highlights several important limitations of general-purpose LLMs for security analysis:
- Lack of execution context: LLMs analyze code as text, not as executable logic. They cannot trace state changes across multiple transactions
- Hallucinated vulnerabilities: The model confidently reports issues that do not exist in the code, which is dangerous in a security context
- Missing composability analysis: Smart contracts interact with other contracts, and LLMs struggle to reason about cross-contract dependencies
- No formal guarantees: Unlike static analysis or formal verification tools, LLM outputs carry no mathematical certainty
General-purpose AI models like ChatGPT are useful for documentation, code commenting, and initial code review, but they should not be relied upon as primary vulnerability detection tools. Vulnerability detection in real-world codebases requires more robust, custom-built and custom-trained AI models.
AI-Assisted Security Auditing
While AI alone is not ready to replace human auditors, it can significantly enhance the auditing process when used as an assistive tool rather than a replacement.
Where AI Adds Value Today
Using AI-assisted tools frees auditors to focus on fully exploring the attack surface and on the most critical and complex bugs. The areas where AI provides the most immediate value include:
- Documentation and code commenting: AI tools make it easier to produce precise and informative documentation, allowing projects to maintain better code quality from the start
- Initial vulnerability scanning: AI can serve as a first-pass filter, flagging potentially suspicious patterns for human review
- Pattern recognition: Common vulnerability patterns like reentrancy, integer overflow, and access control issues can be detected quickly
- Code summarization: Complex contract logic can be summarized to help auditors understand codebases faster
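As a flavor of the "pattern recognition" point, here is a deliberately naive first-pass reentrancy heuristic: it flags a function body where an external call appears before the state write that should precede it. This is purely illustrative - production tools analyze the AST or bytecode, not raw source with regexes:

```python
# Toy first-pass scanner: flag Solidity snippets where an external .call
# happens before the balance update (the classic reentrancy ordering bug).
# Illustrative only; real analyzers work on ASTs or bytecode.
import re

def flags_reentrancy(fn_body: str) -> bool:
    call = re.search(r"\.call\{?.*\}?\(", fn_body)
    write = re.search(r"balances\[[^\]]+\]\s*[-+]=", fn_body)
    # Suspicious if the external call occurs before the balance update.
    return bool(call and write and call.start() < write.start())

vulnerable = """
    (bool ok, ) = msg.sender.call{value: amount}("");
    balances[msg.sender] -= amount;
"""
safe = """
    balances[msg.sender] -= amount;
    (bool ok, ) = msg.sender.call{value: amount}("");
"""
print(flags_reentrancy(vulnerable), flags_reentrancy(safe))  # True False
```

Heuristics like this are exactly where AI earns its keep as a first-pass filter: cheap to run over a whole codebase, with every hit handed to a human for confirmation.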
Current Limitations
Despite these benefits, AI-assisted auditing has clear boundaries:
- Business logic vulnerabilities: AI cannot understand the intended behavior of a contract without external specification, and business logic bugs are often the most critical
- Novel attack vectors: New vulnerability classes that were not in the training data will be missed entirely
- False sense of security: Teams that over-rely on AI tools may ship code with a false confidence in its security
- Context-dependent risks: The same code pattern might be safe in one context and exploitable in another - contextual reasoning remains a human strength
The Path Forward: Custom Models Over General-Purpose AI
The evidence from PropertyGPT's success and ChatGPT-4o's limitations points clearly in one direction: custom-trained, domain-specific AI models are the path forward for smart contract security.
General-purpose models are trained on broad internet data and lack the specialized knowledge needed for reliable security analysis. Custom models trained on audit reports, known vulnerability databases, formal verification properties, and real exploit data can achieve significantly better results.
The key factors for building effective AI security tools include:
- Domain-specific training data: Models trained on actual audit findings, CVE databases, and Certora properties outperform generic models
- Integration with formal methods: Combining AI pattern recognition with mathematical verification creates stronger guarantees
- Human-in-the-loop design: The most effective approach keeps experienced auditors in the loop, using AI to augment rather than replace human judgment
- Continuous learning: Models need to be updated as new vulnerability classes and attack patterns emerge
Conclusion
AI has promising synergies with crypto, especially in smart contract defense and bug finding. From formal verification systems like PropertyGPT to real-time threat detection platforms like Mamoru.ai, the applications are both diverse and impactful.
However, the current state of the technology demands a nuanced approach. General-purpose LLMs like ChatGPT produce too many false positives and miss too many real vulnerabilities to be trusted as standalone security tools. The future lies in custom-built, domain-specific models that combine the pattern recognition capabilities of AI with the rigor of formal verification methods.
At Zokyo, we believe that strategic AI deployment - combined with experienced human auditors - represents the fastest and most impactful way to secure blockchain networks from million-dollar hacks and losses. The technology is advancing rapidly, and the projects that integrate these tools thoughtfully will have a significant security advantage.