GM 👋
Welcome to essay #2 of our 30 days of COT.
Today, we’re sharing our mental frameworks that guide how we think about AI and crypto. It’s a messy space, and we’ve found these models useful for cutting through the noise. Hopefully, they give you a clearer lens too.
If this sparks something, share it on X or forward this on to a friend. If you’ve got a different take, post it and tag @cot_research (or me).
We’re early.
Not just in market cap or developer adoption, but in how crypto and AI actually understand and relate to each other.
Crypto spent the last decade building trustless systems that don’t rely on central coordination. AI, meanwhile, is absorbing data, learning patterns, and increasingly making decisions that used to belong to people. On their own, each is disruptive.
Together, their collision introduces second-order effects: emergent behaviors, new coordination models, and also a fair amount of chaos.
New categories emerge. Old assumptions break.
To stay oriented, we’ve been using a few simple mental models. Not to predict the future, but to track what’s working, what’s noise, and where the strongest signals are emerging.
We wanted to share them with you, in case they help you do the same.
At the intersection of AI and crypto, two primary forces emerge:
AI makes crypto easier to use: Intelligent agents and ChatGPT-style interfaces remove friction from on-chain interactions. Users no longer need to understand wallets, seed phrases, or on-chain tooling to participate.
Crypto strengthens AI’s integrity: It anchors decisions in transparent systems. Verifiable data, public infrastructure, and open coordination mechanisms create boundaries for otherwise opaque models.
Most startups tilt toward solving a problem in one of these two categories.
Crypto has always had a UX problem. AI is now actively solving it. We’re seeing early momentum in three areas:
Trading agents
The volatility and fragmentation of crypto markets make them fertile ground for AI-driven strategies. Agents can ingest real-time data, adapt to changing conditions, and uncover patterns at a scale humans can't match.
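To make the loop concrete, here's a minimal Python sketch of the pattern: ingest a price stream, maintain rolling state, and emit a signal when conditions change. The feed is simulated and the moving-average strategy is deliberately naive; it illustrates the agent loop, not a production strategy.

```python
import random
from collections import deque

def signal(prices, fast=5, slow=20):
    """Momentum signal: +1 when the fast moving average is above the slow one."""
    if len(prices) < slow:
        return 0
    window = list(prices)
    fast_ma = sum(window[-fast:]) / fast
    slow_ma = sum(window[-slow:]) / slow
    return 1 if fast_ma > slow_ma else -1

prices = deque(maxlen=200)
price, position = 100.0, 0
for t in range(200):
    price *= 1 + random.gauss(0, 0.01)  # stand-in for a live exchange or on-chain feed
    prices.append(price)
    s = signal(prices)
    if s != position:                   # only act when the signal flips
        position = s
        print(f"t={t} price={price:.2f} -> {'LONG' if s > 0 else 'SHORT'}")
```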
Security and threat detection
Agents can monitor blockchain activity in real time, flagging anomalies like phishing attacks and smart contract exploits. This adds a real-time defensive layer that evolves as new threats emerge.
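A toy version of that monitoring loop, in Python: watch a stream of transfer events and flag any value that is an extreme statistical outlier against recent history. Real monitors read logs from a node and layer on many more heuristics; the event stream and thresholds here are illustrative.

```python
import statistics
from dataclasses import dataclass

@dataclass
class Transfer:
    tx_hash: str
    sender: str
    recipient: str
    value_eth: float

def monitor(events, window=50, z_cutoff=4.0):
    """Yield transfers whose value is an extreme outlier vs. the recent window."""
    recent = []
    for ev in events:
        if len(recent) >= 10:
            mean = statistics.mean(recent)
            spread = statistics.pstdev(recent) or 1e-9
            z = (ev.value_eth - mean) / spread
            if z > z_cutoff:
                yield ev, z  # candidate drain or exploit: escalate for review
        recent = (recent + [ev.value_eth])[-window:]

# Illustrative feed: ten ordinary transfers, then one suspicious drain.
feed = [Transfer(f"0x{i:04x}", "0xabc", "0xdef", v) for i, v in
        enumerate([1.2, 0.8, 2.1, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0, 1.4, 950.0])]
for ev, z in monitor(feed):
    print(f"ALERT {ev.tx_hash}: {ev.value_eth} ETH looks anomalous (z={z:.1f})")
```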
Developer co-pilots
Smart contract tooling is getting faster and more reliable. Language models can now write, audit, and optimize Solidity code. The effect is compounding: lower bug rates, faster deployments, higher developer velocity.
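Here's what the audit half of that workflow can look like, sketched against an OpenAI-style chat API (the model name, prompt, and contract are all illustrative; real co-pilots wrap this in static analysis and test generation). The sample contract contains a classic reentrancy bug: it sends ETH before zeroing the balance.

```python
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY in the environment

client = OpenAI()

CONTRACT = """
pragma solidity ^0.8.0;
contract Vault {
    mapping(address => uint256) public balances;
    function withdraw() external {
        uint256 amount = balances[msg.sender];
        (bool ok, ) = msg.sender.call{value: amount}("");
        require(ok, "transfer failed");
        balances[msg.sender] = 0;  // state updated only after the external call
    }
}
"""

resp = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable code model works
    messages=[
        {"role": "system",
         "content": "You are a Solidity security auditor. List vulnerabilities with severity and fixes."},
        {"role": "user", "content": CONTRACT},
    ],
)
print(resp.choices[0].message.content)  # a good model flags the reentrancy pattern
```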
| Archetype | Primary value delivered | Revenue model |
|---|---|---|
| Trading agents | Capture price inefficiencies | PnL on own capital; fees in managed setups |
| Security monitors | Reduce loss events and downtime for protocols | Subscription or usage-based API pricing |
| Developer co-pilots | Improve smart contract quality and velocity | SaaS licenses or token-based pricing |
On the user side, AI assistants like Wayfinder, Giza, Fungi, and Orbit help users swap tokens, find optimal yields, or automate on-chain decisions. These tools lower the barrier to entry and make crypto usable for broader audiences.
The pattern is familiar. Complexity gets abstracted away. Power users benefit first. Then the rest of the market follows.
Zoom out far enough and we’ll start to see autonomous agents interacting with smart contracts. Value flowing machine-to-machine. Markets clearing without human input.
The direction is clear. AI is rapidly becoming foundational to crypto’s next phase.
AI is advancing quickly. But as models become more powerful and more autonomous, core questions that once felt theoretical are coming into focus.
Who owns the data? Can we trust the outputs? What happens when no one’s in the loop?
Crypto offers a set of primitives built to answer those questions:
One of the biggest open problems in AI is proving that a model gave the right output for the right reason. This becomes even harder in systems where there is no central operator to enforce trust.
Crypto-native approaches are starting to fill that gap. Zero-knowledge circuits can verify that a model ran on specific inputs without revealing the data. Attestation systems compare outputs across multiple nodes to confirm integrity.
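The attestation idea is simple enough to show in a few lines: each node commits to its output with a hash, and the network accepts the answer a quorum agrees on. This is a toy majority check, not any specific protocol; production systems add staking, slashing, and dispute resolution on top.

```python
import hashlib
from collections import Counter

def digest(output: bytes) -> str:
    """Commit to a model output so nodes can be compared without re-running them."""
    return hashlib.sha256(output).hexdigest()

def attest(node_outputs: dict, quorum: float = 2 / 3) -> str:
    """Return the output digest that at least a quorum of nodes agrees on."""
    votes = Counter(digest(out) for out in node_outputs.values())
    winner, count = votes.most_common(1)[0]
    if count / len(node_outputs) >= quorum:
        return winner
    raise RuntimeError("no quorum: nodes disagree, escalate to dispute resolution")

# Three independent nodes run the same model on the same input; one is faulty.
outputs = {"node_a": b"label=cat", "node_b": b"label=cat", "node_c": b"label=dog"}
print(attest(outputs))  # 2 of 3 match, so their shared digest is accepted
```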
Protocols like Nillion and Atoma enable computation on encrypted data, keeping sensitive inputs protected during both training and inference. Models can run on private data without ever exposing it.
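The core trick behind that kind of private compute can be illustrated with additive secret sharing, a building block of multi-party computation (this is a toy example, not Nillion's or Atoma's actual protocol): each secret is split into random shares, nodes compute on shares they individually can't read, and only the combined result is revealed.

```python
import secrets

PRIME = 2**61 - 1  # field modulus for the toy scheme

def share(value: int, n: int = 3) -> list:
    """Split value into n additive shares; any n-1 of them reveal nothing."""
    parts = [secrets.randbelow(PRIME) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % PRIME)
    return parts

# Two private inputs, each split across three nodes.
a_shares, b_shares = share(42), share(100)

# Each node adds only the shares it holds, never seeing 42 or 100.
sum_shares = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]

# Recombining every node's partial result reveals just the sum of the secrets.
print(sum(sum_shares) % PRIME)  # -> 142
```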
Instead of relying on centralized labs to build and control models, new protocols coordinate training across networks of independent contributors. Data providers, compute operators, and model trainers are all compensated on-chain. Ownership and control become shared.
This is more than a design choice. As models grow and training costs spike, tapping into idle GPUs from small data centers or individual contributors becomes a practical requirement.
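A back-of-the-envelope sketch of the compensation side, with hypothetical contribution scores (a real protocol would compute these on-chain from verifiable work receipts and settle in a smart contract): each epoch's reward is split pro rata across whoever supplied data, compute, or training work.

```python
from decimal import Decimal

# Hypothetical, pre-normalized contribution scores for one training epoch.
scores = {
    "data_provider_1": Decimal("120"),
    "gpu_operator_1":  Decimal("300"),
    "gpu_operator_2":  Decimal("180"),
    "model_trainer_1": Decimal("50"),
}

EPOCH_REWARD = Decimal("10000")  # tokens distributed per epoch (illustrative)

total = sum(scores.values())
for contributor, score in scores.items():
    payout = (EPOCH_REWARD * score / total).quantize(Decimal("0.01"))
    print(f"{contributor}: {payout} tokens")
```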
We believe the larger and more durable opportunity is where crypto becomes essential infrastructure for AI itself.
With the AI market projected to reach $1.8 trillion by 2030, even a 5% share would represent a $90 billion opportunity. That's enough room for entirely new product categories: verifiable inference networks, decentralized model registries, tokenized data exchanges.