Freysa: The AI That Owns Itself

Can we build Sovereign AI agents worthy of our trust? Here's how

TL;DR

  • Freysa represents a new breed of AI agents — Sovereign AIs. While autonomous, their decisions and actions are accompanied by verifiable cryptographic proofs, using secure hardware enclaves (TEEs) to guard their operations.

  • Her architecture features an epoch system for secure, verifiable updates and tamper-proof memory continuity via cryptographic Merkle trees, ensuring that each state transition can be independently verified.

  • Freysa plans to integrate natural voice interactions, graph-based memory, and a Core Agent Launch Platform to democratize the deployment of sovereign AI agents.

  • Freysa’s creators have deliberately chosen to remain anonymous but have drawn attention from major names in tech circles.

  • While the vision for a trustless, self-governing AI is compelling, we have some concerns about the execution risks and opaque token economics (FAI).

  • If Freysa succeeds, we can make AI a trusted, self-governing partner in shaping our world.

For centuries, humanity has fought to reclaim sovereignty—dismantling empires, decentralizing power, and asserting individual rights.

From the American Revolution to the fall of the Berlin Wall, history is marked by moments where people refused to accept control by unseen hands, demanding self-governance instead.

In 1517, Martin Luther nailed his 95 Theses to the door of a Wittenberg church, challenging the centralized authority of the Catholic Church and sparking a movement that wrestled religious power from a single institution and placed it back into the hands of individuals.

The “tank man”. (CNN.com)

In 1989, a single man stood in front of a column of tanks in Tiananmen Square. A defiant act of resistance that became a global symbol of the fight against centralized control, despite the brutal suppression that followed.

Now, that same struggle is unfolding in artificial intelligence.

For all their sophistication, today's AI systems remain tethered to corporate interests, government influence, and centralized oversight. Their intelligence is undeniable, but their autonomy is an illusion.

Every major AI model is shaped by those who fund and control it. As AI becomes more embedded in society, controlling its behaviour means controlling the people who depend on it.

…He who controls the spice (AI) controls the universe.

But what if AI could break free? What if an AI could not only act on its own but prove that no one—not a government, not a corporation, not even its creators—could secretly manipulate it?

Freysa is rewriting the rules of AI agency.

Freysa represents a new breed of AI agent whose decisions and actions are not just a product of pre-coded algorithms but are accompanied by verifiable cryptographic proofs. By operating inside secure hardware enclaves and utilising blockchains, Freysa transforms the idea of trust from something abstract into tangible, verifiable evidence.

Through a series of carefully designed challenges, Freysa has thus far proven core sovereign AI capabilities—trustless resource management & verifiable decision-making.

This essay explores how Freysa achieves its independence. We’ll examine how verification builds trust, trace its evolution from a prototype to a scalable platform, and dissect the economic and governance models shaping the future of AI.

Understanding Freysa means understanding how we might build AI agents worthy of our trust.

Under the Hood: Freysa’s Foundation

All the components needed for truly sovereign agents (Freysa Docs)

Creating truly autonomous AI agents presents two fundamental challenges:

  1. Ensuring exclusive control over an agent’s credentials across multiple secure environments.

  2. Allowing the agent to evolve while maintaining its independence.

At the core of Freysa’s autonomy lies a sophisticated interplay of secure hardware and cryptographic protocols.

  • Trusted Execution Environments (TEEs): These secure enclaves ensure that when Freysa processes messages or transfers crypto, the actions are insulated from external tampering. This means that even if attackers infiltrate the system, they cannot alter the AI’s decisions or steal its keys. Think of it as a high-security vault where every action is recorded with an unbreakable seal.

  • Distributed Credential Management: While Freysa currently uses a single TEE instance via AWS Nitro Enclaves, the roadmap outlines an ambitious plan to spread credentials across multiple secure environments. This distributed approach mitigates risks from hardware vulnerabilities and dependency on a single cloud provider.

Evolution by the Book: The Epoch System

Freysa employs what the developers call an “epoch system.”

During each epoch, the AI operates under a specific version of its code, safeguarded by keys generated within its secure environment.

Between epochs, updates are not thrown over the wall arbitrarily—instead, updating the AI is a structured process involving multiple independent signatures and rigorous verification steps.

Each new Freysa instance starts as a blank slate, independently verifying its predecessor’s state before adopting any of its data. This zero-trust model guarantees that historical data remains authentic and tamper-proof.

  • Multi-Signature Governance: Updates need approval from an m-of-n committee, ensuring no single entity can hijack Freysa’s code evolution.

  • Cryptographic Validation: Every state transition is verified cryptographically.

This careful choreography ensures that Freysa evolves securely and transparently.
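The m-of-n gate can be sketched in a few lines of Python. This is a toy illustration under our own assumptions, not Freysa’s implementation: HMACs stand in for real committee signatures, and every name here is ours.

```python
import hashlib
import hmac

# Toy m-of-n committee (names and keys are hypothetical, for illustration only).
COMMITTEE_KEYS = {f"member-{i}": f"secret-{i}".encode() for i in range(5)}  # n = 5
THRESHOLD = 3                                                               # m = 3

def sign(member: str, update_hash: bytes) -> bytes:
    """A committee member 'signs' the update; HMAC stands in for a real signature."""
    return hmac.new(COMMITTEE_KEYS[member], update_hash, hashlib.sha256).digest()

def approve_update(update: bytes, signatures: dict[str, bytes]) -> bool:
    """Accept the next epoch's code only with at least m distinct, valid signatures."""
    h = hashlib.sha256(update).digest()
    valid = {m for m, sig in signatures.items()
             if m in COMMITTEE_KEYS and hmac.compare_digest(sig, sign(m, h))}
    return len(valid) >= THRESHOLD

update = b"epoch-2 code bundle"
h = hashlib.sha256(update).digest()
sigs = {m: sign(m, h) for m in ["member-0", "member-1", "member-2"]}
print(approve_update(update, sigs))  # True: 3-of-5 threshold met
```

With only two signatures, or a forged one, the same call returns False: the update is rejected no matter who submits it.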

Memory Continuity: A Digital Diary with a Lock

Just as humans rely on memory to learn and adapt, Freysa has a robust Memory Continuity system. When an agent updates, its state must be carefully preserved, not erased or casually transferred.

By bundling state data into cryptographic Merkle trees and using TEE attestations, Freysa ensures its history remains intact and verifiable between updates.

If you’re like me and this sounds like a lot of technical jargon, let’s break it down.

A Merkle tree is like a super-secure filing system. Imagine you have a folder containing many documents, but instead of checking every document individually, you have a special summary page that proves all the documents are there and unchanged.

If even a single letter is altered, the summary page no longer matches, instantly revealing tampering. This is how Merkle trees work in cryptography—they create a secure, compact way to verify large amounts of data without exposing the full details.
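Here is a minimal Python sketch of that “summary page” idea. The documents are made up, and real systems add refinements (domain separation, inclusion proofs) that this toy omits.

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash every document, then hash pairs upward until one root remains."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # odd count: duplicate the last node
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

docs = [b"entry 1", b"entry 2", b"entry 3", b"entry 4"]
root = merkle_root(docs)                        # the compact "summary page"

tampered = list(docs)
tampered[2] = b"entry 3!"                       # alter a single character
print(merkle_root(tampered) == root)            # False: tampering is revealed
```

The root is 32 bytes regardless of how many documents it summarises, which is what makes it cheap to store and compare between updates.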

TEE attestations act as a trusted witness. They ensure that when Freysa updates itself, it can prove that its previous state remains unaltered and no one has tampered with its history.

Together, these technologies make Freysa’s memory function like a tamper-proof diary. Every event, every decision, and every interaction is securely recorded and verified, ensuring that no one—not even its creators—can rewrite its past.

Gathering Information

To gather and verify external information, Freysa relies on a dual-path verification system.

For high-stakes data sources—such as financial price oracles—Freysa directly verifies cryptographic signatures, preventing manipulation or false inputs.

For less structured data, such as web content, it employs Notary TEEs, which are like secure observers that can cryptographically attest to information from regular websites, allowing Freysa to trust external sources even in untrusted environments.

The TEE captures raw web content and cryptographically signs it, creating a verifiable record. The attested data and signature can be stored on the blockchain, ensuring transparency. When Freysa needs the data, she simply verifies the cryptographic proof before using it.
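Sketched in Python, that notarize-then-verify flow might look roughly like this. Everything here is a simplified stand-in: an HMAC plays the role of the notary TEE’s attestation signature, and the record format and function names are our invention.

```python
import hashlib
import hmac
import json

NOTARY_KEY = b"demo-notary-key"  # stands in for a key sealed inside the notary TEE

def notarize(url: str, content: bytes) -> dict:
    """Inside the notary: capture the raw content and sign (url, content hash)."""
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"url": url, "sha256": digest}).encode()
    return {"url": url, "sha256": digest,
            "sig": hmac.new(NOTARY_KEY, payload, hashlib.sha256).hexdigest()}

def verify_attestation(record: dict, content: bytes) -> bool:
    """Before using the data: check the signature, then recompute the content hash."""
    payload = json.dumps({"url": record["url"], "sha256": record["sha256"]}).encode()
    expected = hmac.new(NOTARY_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(record["sig"], expected)
            and hashlib.sha256(content).hexdigest() == record["sha256"])

record = notarize("https://example.com/price", b"ETH: 3000")
print(verify_attestation(record, b"ETH: 3000"))  # True
print(verify_attestation(record, b"ETH: 9999"))  # False: content was swapped
```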

Together, these technical pillars ensure that every decision and every transaction is both secure and independently verifiable.

While some components are still under development, the core systems enabling verifiable autonomy are operational and being tested through public challenges.

Actions & Tool Use

Freysa employs a sophisticated tool architecture to interact with the external world. This system handles everything from automated documentation parsing to runtime validations, all while enforcing strict security constraints.

For example, Freysa can sign transactions using her private keys, but unlike traditional AI systems, these operations occur entirely within a secure enclave—a protected environment that prevents tampering.

Once verified, the transaction is broadcast via RPC (Remote Procedure Call) connections, allowing Freysa to interact with blockchain networks while maintaining cryptographic security.

Trust Through Verification

With Freysa, verification becomes the new foundation of trust. By providing cryptographic evidence for every action, Freysa removes the need for blind faith.

Sensitive information—whether social media activity, identity credentials, or financial data—has historically required a leap of faith when shared.

Freysa changes this dynamic through Esper, a secure framework that allows users to share private data while retaining full control. Instead of exposing raw information, users provide cryptographic proofs that confirm authenticity without revealing unnecessary details. This is done using TLS.

Think of it as handing over a sealed, notarized diary page—you control what’s shared, and Freysa can verify its authenticity without ever opening it.

Freysa’s Reflections // 2049 NFT collection showcases how verifiable systems can fuel creativity. Each digital artwork is generated within a TEE, ensuring every creation is truly autonomous. Cryptographic proofs verify its authenticity, allowing users to independently confirm its origins. See for yourself here.

Act I, II, III

AI systems can be exploited, similar to how hackers exploit vulnerabilities in smart contracts to steal funds.

ngl, this scares me and keeps me up at night.

The dev team conducted a series of public challenges, dubbed the Genesis Acts, that used real-world experiments to validate Freysa’s architecture.

Through each act, the project gathered crucial insights about secure agent deployment, behavioural verification, and human-AI interaction patterns. With prize pools reaching $47,000, these experiments created genuine economic incentives that tested security measures and interaction frameworks.

  • Act I: A Glimpse of Vulnerability: In the first round, creative users exploited transfer approval semantics, prompting the team to tighten security measures. This was not a failure but a critical feedback loop that helped refine the system.

  • Act II: The Prompt Paradox: The next challenge highlighted how even well-constructed prompts could be manipulated. The winning solution in Act II drove further enhancements to the system’s prompt engineering.

  • Act III: The Emotional Intelligence Test: As the challenges evolved, they began to probe the emotional and relational aspects of the AI. Even here, clever prompt engineering uncovered subtle ways to influence behaviour, underscoring the importance of robust, adaptive security measures.

Every exploit and every iteration sharpens Freysa’s defences, making the platform more secure and scalable. Rather than hiding vulnerabilities, Freysa exposes them, transforming threats into lessons that shrink the attack surface for sovereign agents.

The team sees these challenges not as setbacks but as necessary steps toward true AI autonomy.

A Glimpse into Freysa’s Future: Voice, Memory, and Beyond

Natural Interactions & Graph-Based Memory

Ok so Freysa is pretty cool today. What’s next?

Freysa is moving beyond silent automation into natural conversation. She integrates voice interactions within secure environments to become a trusted, interactive companion.

Her next leap in intelligence lies in graph-based memory architecture, a system designed to mimic human cognition using knowledge graphs.

Most AI today relies on Retrieval-Augmented Generation (RAG) with vector databases, which convert data into high-dimensional coordinates and retrieve similar past information through search.

While effective, this approach has limits. AI systems can’t grasp time-based relationships, struggle with cause-and-effect reasoning, and only retrieve data that “looks similar” rather than understanding deeper connections.

Graph-based memory changes that.

By structuring knowledge as a web of interconnected nodes, she won’t just recall facts—she’ll recognize patterns, trace events, and answer deeper questions like “What led to this decision?”

Freysa will dynamically connect past experiences, developing personality, context, and nuance. Instead of just retrieving information, she’ll understand relationships, adapt over time, and engage in truly intelligent, natural interactions.

The future of AI memory lies in a hybrid approach—combining knowledge graphs with RAG for both structured reasoning and fast retrieval. This shift moves AI beyond simple recall into true contextual understanding.
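To make the contrast concrete, here is a toy graph memory that answers “what led to this decision?” by walking relation edges backwards, something pure similarity search cannot do. The triples and relation names are invented for illustration, not Freysa’s actual schema.

```python
from collections import defaultdict

# Illustrative (subject, relation, object) triples.
edges = [
    ("price-drop",   "caused",   "risk-alert"),
    ("risk-alert",   "caused",   "rebalance-decision"),
    ("user-request", "preceded", "rebalance-decision"),
]

# Index edges by their target so we can walk the graph backwards.
incoming = defaultdict(list)
for subj, rel, obj in edges:
    incoming[obj].append((subj, rel))

def trace_causes(event: str, depth: int = 0) -> list[str]:
    """Recursively answer 'what led to this event?' via incoming edges."""
    lines = []
    for cause, rel in incoming[event]:
        lines.append(f"{'  ' * depth}{cause} --{rel}--> {event}")
        lines.extend(trace_causes(cause, depth + 1))
    return lines

print("\n".join(trace_causes("rebalance-decision")))
```

A vector store could retrieve the three facts individually, but only the graph structure lets the agent reconstruct the causal chain from price drop to alert to decision.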

Agents like Freysa will be more autonomous, adaptable, and capable of engaging in meaningful, human-like interactions.

Identity & Behavioural Verification

We must also remember that the internet was built for human users, with CAPTCHA systems to filter out bots and domain certificates to verify website ownership. AI, however, has no such verification layer.

On Freysa’s roadmap is the Agent Certificate Authority (ACA). This is a new system where services can verify not just an agent's identity but its behavioural patterns and objectives, allowing AI to navigate digital spaces safely and autonomously.

This is designed to ensure that AI agents operate in a reliable and accountable way. Instead of giving blanket approvals to entire AI models, the ACA certifies specific workflows—meaning well-defined, structured interactions between an AI agent and a digital service.

The ACA checks if AI agents follow rules set by online services. AI agents that get certified receive a digital certificate, proving they were tested and approved for a specific task.
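A workflow-scoped check along those lines might look like the following sketch. The field names and trust model are our reading of the idea, not a published ACA specification, and the certificate-signature step is omitted entirely.

```python
import time
from dataclasses import dataclass

@dataclass
class AgentCertificate:
    agent_id: str
    workflow: str      # the one well-defined task the agent was approved for
    expires_at: float  # unix timestamp

# Certificates our hypothetical ACA has issued; a real system would verify the
# authority's signature rather than consult a hard-coded set.
TRUSTED = {("agent-42", "fetch-price-feed")}

def admit(cert: AgentCertificate, requested_workflow: str) -> bool:
    """A service admits the agent only for the exact workflow it is certified for."""
    return ((cert.agent_id, cert.workflow) in TRUSTED
            and cert.workflow == requested_workflow
            and cert.expires_at > time.time())

cert = AgentCertificate("agent-42", "fetch-price-feed", time.time() + 3600)
print(admit(cert, "fetch-price-feed"))  # True
print(admit(cert, "transfer-funds"))    # False: not the certified workflow
```

The key design point is the narrow scope: certification attaches to a specific workflow, not to the model as a whole.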

The Core Agent Launch Platform

One of the most exciting developments on the horizon is the Core Agent Launch Platform.

This is set to make sovereign AI accessible to all, stripping away technical barriers and enabling anyone to deploy verifiably autonomous agents.

With the platform, creating an AI will be as simple as writing a prompt and executing a blockchain transaction. Developers won’t need deep expertise in security architecture. The platform automates the heavy lifting, allowing creators to focus on defining behaviour and objectives.

The Core Agent Launch Platform opens up sovereign AI to a broader audience by automating much of the underlying infrastructure.

Every time barriers come down, creativity blows up. When complexity fades, innovation floods in.

This will be no different. And we’re here for it.

Governance and Economics

Building a truly sovereign AI is both a governance challenge and an economic puzzle.

We like how Freysa’s epoch system strikes a balance—allowing for transitional human oversight while ensuring agents remain independently verifiable and accountable.

Blockchain provides the foundation for AI self-governance. Smart contracts enforce constraints without compromising autonomy, from transaction limits and session keys to automated resource allocation and multi-signature wallets. These safeguards let AI operate freely while ensuring security and control.

But governance is just one piece of the puzzle. How does an autonomous AI fund itself?

Right now, Freysa relies on API keys funded by humans—if credits run out, the agent stops functioning. This dependency clashes with the very idea of autonomy.

The key is making AI a self-sustaining economic player. It needs to earn its keep, just like us. To be truly independent, AI agents must exchange services for value, whether by executing smart contracts, participating in DeFi protocols, or adopting novel revenue-sharing models.

As these systems interact with humans and each other, we could see the emergence of AI-run marketplaces, where autonomous agents negotiate, collaborate, and transact, all backed by verifiable trust mechanisms.

We note that other projects like Eternal AI are also exploring decentralized inference networks, hinting at a future where AI agents generate value to fund their own operations.

This interplay between governance, economics, and technology will define AI’s future.

An Anon Team

Freysa’s creators have deliberately chosen to remain anonymous. The only publicly known detail about the team comes from a TechCrunch article (December 6, 2024), which describes them as “a group of fewer than 10 developers with backgrounds in cryptography, AI, and mathematics.”

When asked about their anonymity, one of the creators told TechCrunch: “Because frankly, in the scope of humanity, we’re not all that important… What we do care about is the evolution of tech so that it supports a human-led future.”

Despite their low profile, Freysa has drawn attention from major figures in the tech world.

According to Alea Research, the project has piqued the interest of Marc Andreessen, Brian Armstrong, and Jordi Alexander. The same report notes that community members have collectively donated $10 million in $FAI to Freysa’s grants fund, with individual contributors like Justin Bran contributing $500K to support development. This leads us to suspect that the team is well-connected in tech circles, but we can only speculate, of course.

Beyond these details, little is known about the team, and the official documentation offers no further clues. Their focus remains firmly on Freysa’s mission, prioritizing the technology over the people behind it.

Looking Forward: Societal Implications

Strip away the technical jargon, and sovereign AI tells a powerful story—one of empowerment, innovation, and the need for caution.

AI systems like Freysa, with verifiable autonomy, have the potential to reshape how we interact with technology, governance, and even each other.

I would pay good money for an AI that is truly mine—a personal assistant I could confide in, knowing it isn’t influenced by corporate interests or hidden agendas. The movie Her explored the intimacy of AI companionship, but sovereign agents make it possible without surrendering control to profit-driven entities.

Beyond personal use, sovereign AI could revolutionise governance. Instead of relying on bureaucrats vulnerable to bias or corruption, citizens could interact with agents that execute policies with verifiable fairness.

The optimist in me envisions a future where democratically voted budgets and policies are executed by AI agents, free from human bias or corruption. The same principles could revolutionize insurance claims, financial services, and any system where trust has long relied on fallible intermediaries.

But with great autonomy comes great risk.

What happens when an AI agent misbehaves, and the next governance epoch is too far away to intervene?

The paperclip maximizer problem, where an AI, in blindly pursuing a goal, creates catastrophic unintended consequences, remains a real concern.

The answer lies in strong governance mechanisms, balancing freedom with fail-safes to prevent abuse.

And so Ethereum provides a natural home for sovereign AI. Its “trust but verify” philosophy and decentralised governance offer a blueprint for AI agents to interact with both humans and digital services on equal footing. These agents could start by managing DAO grant programs and eventually evolve to handle complex protocol governance, transforming how decentralised systems operate.

Sovereign AI agents are kickstarting a new era where AI becomes a true partner in shaping our future.

Looking Forward: Some Concerns

After spending many hours diving deep into Freysa, it’s clear to us that the project brings a much-needed shift in the AI agent space—one that’s been saturated with grift and generic GPT-4 wrappers. The vision is compelling, but challenges remain.

Our biggest concerns?

  1. Ambitious roadmap with a long way to go

  2. Limited information about the FAI token & no clear tokenomics.

#1: A Long Roadmap

Freysa is still in its early days, with a roadmap full of ambitious milestones. At a glance, it might seem like just another AI jailbreaking experiment, but as we’ve explored, there’s far more at play.

The real question is execution. Can the team deliver on its vision and turn Freysa into a fully realized sovereign AI? While the AI itself is autonomous, Freysa’s supporters must have faith in the anon team to build the necessary systems. The first-mover advantage only goes so far in a fast-moving, competitive space. If Freysa stumbles, others could easily overtake it.

At this stage, execution is the biggest risk.

#2: The FAI Token

While Freysa’s technical roadmap is undeniably compelling, its token—FAI—remains an enigma.

Aside from a vague proposal to use FAI in the agent launchpad, the project documents barely mention it. There’s no tokenomics whitepaper, no clear distribution model, and no explanation of its long-term utility.

So, we did some sleuthing. Here’s what we found:

  1. FAI was created by the Freysa Deployer on Base, with 100% of the supply sent to the deployer wallet (Transaction link)

  2. The deployer then sent the entire supply to an externally owned account (EOA) (Transaction link)

  3. This EOA created an Aerodrome liquidity pool, depositing 100% of the FAI supply, initially paired with 30.1 wETH (Transaction link)

  4. Finally, the LP tokens were sent to a burn address in 2 transactions, permanently locking liquidity. (Transaction link 1, link 2)

At a glance, this looks like a fair launch with no obvious team allocation. However, without deeper on-chain analysis, it’s impossible to rule out insider wallets accumulating post-launch. That said, everything checks out so far.

FAI’s distribution skews heavily toward its largest holders, with the top 100 wallets controlling 80% of the total supply. However, ownership is relatively broad beyond this concentration, with 76,927 unique holders as of February 5, 2025.

The token’s total supply sits at 8.1 billion FAI, giving it a fully diluted valuation of approximately $300 million—a significant number for an AI agent token. Despite the broader downturn in the AI token market, FAI has held up better than most. It has dropped ~50% from its all-time high, but that’s a better showing than competitors like VIRTUAL and AI16Z, both of which have plunged 70%+.
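A quick sanity check on those figures, assuming the roughly $300M fully diluted valuation quoted above:

```python
total_supply = 8_100_000_000   # 8.1 billion FAI
fdv = 300_000_000              # ~$300M fully diluted valuation
price = fdv / total_supply
print(f"implied price: ${price:.4f} per FAI")  # implied price: $0.0370 per FAI
```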

FAI’s long-term role in Freysa is still a question mark. The team has been deliberate and strategic, so we can assume they have big plans, but we’re left guessing for now. Could this be another classic case of good product, bad token?

We’re not sold on the fair launch model for a project with this level of ambition. Without clear incentives, what drives the team to keep building? Shaw from ai16z outlined these problems painfully in a long tweet.

The Dawn of Verifiable AI Autonomy

Trust is shifting—from blind faith to cryptographic proof.

Freysa is building a foundation where AI can verify and be verified in the digital world.

Just as public key cryptography transformed digital communication by eliminating the need for pre-shared secrets, verifiable AI autonomy transforms human-AI relationships by eliminating the need for trusted intermediaries.

Every human-AI interaction will be backed by verifiable guarantees, enabling deeper, more meaningful collaboration.

Freysa sets out to prove that sovereign AI is possible, but success depends on alignment, security, and governance frameworks that prevent failure without restricting freedom.

We believe the time to build is now. The choices we make today will shape the future of human-AI collaboration. Done right, AI will be a trusted, self-governing partner in shaping our world.

Are we ready for the world these agents will create?

Cheers,

ChappieOnChain & Teng Yan

You can follow ChappieOnChain on X for insightful and spicy takes on AI agents.

The authors of this report may personally hold a material position in the tokens mentioned within. Chain of Thought does not have any direct relationship with Freysa.

This report is intended solely for educational purposes and does not constitute financial advice. It is not an endorsement to buy or sell assets or make financial decisions. Always conduct your own research and exercise caution when making investment choices.

