The Robots Are Coming: FrodoBots

Inside FrodoBots’ playful push to solve physical AI’s real-world data problem

TL;DR

  • Robotics lags digital AI due to a major data bottleneck: collecting diverse, real-world labeled teleoperation data (human control + sensor input) is extremely costly, slow, and complex.

  • FrodoBots addresses this using ultra-cheap ($150+) sidewalk robots combined with gamified remote control. This makes collecting essential labeled teleoperation data fun, scalable, and affordable.

  • So far, they have delivered the open-source FrodoBots 2K Dataset (2000+ hours, used by top labs) and launched engaging experiments (SAM agent, Robots.fun platform, UFB fighting, Octo Arms manipulation) to attract users and generate data.

  • The next stage is BitRobot, essentially a “Bittensor for Robotics”. It’s a decentralized system of specialized subnets, each focused on a different type of robot.

  • BitRobot wants to scale data collection across diverse embodiments, incentivized by Verifiable Robotic Work (VRW) and using ENTs (NFT-based robot passports).

  • It’s a synergistic approach: SAM taps directly into the fun, crypto-native retail energy and channels that into BitRobot, which is building the serious infrastructure that underpins the vision.

  • Ultimately, the goal is to accelerate embodied AI through a fun, affordable, decentralized data collection ecosystem, monetizing via hardware, data, potential BitRobot rewards, and platform fees.

Let’s start with a tweet this time.

Errr…decatrillion? Haven’t heard this word. Turns out that’s $10,000,000,000,000. A whole lot of zeroes.

Naturally, we got curious about robotics. And what we found was way more compelling than we expected.

Do you remember when ChatGPT hit 100 million users in just two months after launching in 2022? (Yeah, feels like ancient history now.)

Or when we all collectively gasped at the comparison between the AI-generated video of Will Smith eating spaghetti from 2023 versus the 2024 version, and realized just how far we'd progressed in only 12 months?

AI generated video of Will Smith eating in 2023 vs 2024

AI has been on an absolute tear. Text, images, video, code—all advancing at an incredible pace.

Meanwhile, in the physical world, the story’s a little different. My “smart” robot vacuum just got stuck on the same rug for the third time today. It’s really bad at navigating rooms that are cluttered with stuff like mine. (Sigh)

And it’s not just my living room. Despite decades of robotics research, the field still struggles to make the jump from lab demos to real-world reliability.

There’s been no ChatGPT moment for robots… yet.

No single breakthrough that suddenly made embodied intelligence feel magical, usable, or everywhere.

In fact, while digital AI adoption has gone exponential, robotics adoption has remained largely linear—slow and limited to narrow industrial use cases.

Source: Coatue - Path to the General Purpose Robotics

That’s not because people don’t want robots.

It’s because building physical intelligence is a completely different beast.

The Digital-Physical Divide

While language models can confidently hallucinate their way through conversations with minimal real-world consequences (for now...), robots must navigate actual stairs, pick up real objects, and somehow avoid sending your grandmother's priceless vase crashing to the floor.

The gap exists for a good reason, though. Digital AI has gorged itself on decades of human-generated content.

30+ years of blogs, social media posts, YouTube videos (and their -gasp- comment sections), subreddits, academic papers, and basically everything we've ever typed or uploaded.

It's a never-ending feast, with billions of new data points served up daily.

Physical AI, by comparison, is on a starvation diet.

Autonomous vehicles, warehouse robots, and humanoid assistants all rely on well-labeled sensor data (camera feeds, LiDAR signals, and telemetry logs), but collecting this data from tens of thousands of different real-world conditions remains extraordinarily challenging.

It's not just about volume, it's also about specificity and diversity. To train robots effectively, we need something incredibly rare: robots moving through the real world, controlled by humans who provide the precise "action labels" that AI needs to learn.

This data allows them to understand how to interact with physical objects—how much force to apply when opening a door, how to adjust grip when picking up a slippery object, or how to maintain balance on uneven terrain.

Source: Coatue - Path to the General Purpose Robotics

But this isn't data we naturally produce through our everyday activities. It requires specialized hardware, dedicated effort, and a purpose-built infrastructure that simply doesn't exist at scale.

Yet, the potential impact of embodied AI is massive.

From delivering groceries and medicine via autonomous sidewalk robots, to home service robots that handle mundane chores, to agile drones that inspect critical infrastructure, physically capable AI stands to revolutionize our daily life in ways that purely digital systems cannot.

The question is: How do we move from these small, fragmented experiments in robotics labs to an ecosystem that fosters large-scale, real-world data collection and collaboration, all at an affordable cost?

Gamifying Robotics

One team might have found the answer. Bonus: it doesn’t involve $20,000 robots.

FrodoBots is a robotics project that is taking a completely different approach: start with cheap sidewalk robots, make them controllable by anyone in the world, and wrap the whole experience in a game.

Instead of chasing autonomy from day one, they embraced teleoperation at scale as the training ground for future embodied AI models.

Their bet is simple → people will do repetitive tasks if you make them fun and rewarding.

The same gamer who’d never sign up for a formal data collection job might happily spend hours steering a robot through city streets to earn points, collect NFTs, and climb the leaderboard.

Think of it as: "data collection as a game loop." And maybe, just maybe, a path to solving one of the most stubborn bottlenecks in physical AI.

Before Robots Take Over, They Need Data

Michael, FrodoBots’ founder, tells us:

“In robotics, there are plenty of benchmarks where humans are still way, way, way better than AI. And it’s going to be like that for a while.”

And a big reason for that gap? Data. Or the lack of it.

The bottlenecks holding back Physical AI fall into three key categories:

1. The Simulation-to-Real World Gap

Training robots in the real world is expensive, slow, and often dangerous. If you’re training a robot to pick up a glass, you’ll be breaking many glasses. So, a lot of teams rely on simulation environments built with advanced game engines that can model physics with impressive accuracy.

This allows for parallelization at a massive scale.

A research lab might have access to 10 physical robots, but they can run 10,000 simultaneous simulations on a decent GPU cluster. This enables reinforcement learning algorithms to experience millions of scenarios in a fraction of the time required for real-world training.
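
A toy sketch of why simulation parallelizes so well: with made-up dynamics standing in for a real physics engine, stepping ten thousand environments at once is just a batched array operation.

```python
import numpy as np

# Toy illustration of batched simulation: ten thousand extremely simplified
# "drive to the target" worlds stepped in lockstep. Real labs use physics engines
# (MuJoCo, Isaac, etc.) on GPU clusters; the point is only that simulated rollouts
# parallelize into one batched array operation.
N_ENVS = 10_000
positions = np.zeros((N_ENVS, 2))                 # robot (x, y) in each sim
targets = np.random.uniform(-5, 5, (N_ENVS, 2))   # goal per environment

def step(actions: np.ndarray) -> np.ndarray:
    """Apply one action per environment, return a distance-based reward per env."""
    global positions
    positions = positions + 0.1 * actions          # crude kinematics, no real physics
    return -np.linalg.norm(positions - targets, axis=1)

rewards = step(np.random.uniform(-1, 1, (N_ENVS, 2)))
print(rewards.shape)   # (10000,) -- ten thousand experience samples per tick
```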

But there's still a wide gap between simulation and reality.

Take the example of something simple like a towel. To properly simulate it, you'd need to model countless possible folds, the way it drapes over objects, how it responds to air currents, its interactions with other materials, and an infinite array of possible states.

Our best physics engines struggle with these deformable objects and the infinite variation of real-world environments. So a robot trained purely in simulation inevitably crashes in reality—literally.

As one researcher put it: "The difference between simulation and the real world is that the real world is real."

2. The Embodiment Gap

What about training robots using videos? We do have a lot of video data of humans doing all kinds of things on the internet.

But something’s missing.

When you watch LeBron James dunk a basketball, you're seeing the external result of an incredibly complex internal process:

  • the precise feeling of muscles tensing

  • the awareness of body position

  • micro-adjustments made based on tactile feedback.

Videos capture none of this essential information. It's like trying to learn to ride a bike by watching Tour de France highlights. You might get the general idea, but you'll still face-plant the first time you try.

While language models train on 15 trillion tokens and image models on 6 billion image-text pairs, robotics models today have only about 2.4 million episodes available to train on—orders of magnitude less data than what's needed for general-purpose AI.

Source: Coatue - Path to the General Purpose Robotics

Even the richest video dataset lacks the crucial "how it feels" information that embodied agents need to navigate physical reality.

This is the embodiment gap.

3. The Unconscious-to-Conscious Gap

Perhaps the most critical bottleneck is teleoperation data—instances where humans directly control robots and provide the "action labels" that AI can learn from.

Unlike text on the internet, which we produce naturally through everyday communication, teleoperation data requires specialized hardware, coordinated efforts, and significant human labor.

This creates a chicken-or-egg problem:

To build good robots, you need humans to generate data by controlling robots, but controlling robots is only practical when there are many of them and they're already somewhat capable...which requires data.

Evaluation is Tricky

Once you have real-world data, you face a second challenge: evaluation.

To test whether a newly trained model for navigation or grasping truly works, you must run it physically in the field. Evaluating a robot’s performance is different from evaluating a language model on a standard benchmark.

With digital AI, you can evaluate a new model almost instantly through benchmarks and user feedback.

  • Did it answer the question correctly?

  • Did it generate a coherent image?

Thousands of tests can be run in parallel, providing rapid feedback. Edge cases can be systematically discovered by throwing millions of varied inputs at the model until it breaks.

With robots, the only real test is deploying them in the messy real world and waiting for something to go wrong. And as your robot improves, failures become increasingly rare—but you need those failures to understand your model's weaknesses.

No universal “pipeline” currently exists to systematically test each new robotics AI approach at scale across diverse environments.

This creates a paradoxical situation where each incremental improvement dramatically increases the time needed to find the next edge case.

Tesla has logged millions of miles with its self-driving technology but is still hunting for those rare, critical failure modes that stand between Level 2 autonomy and the coveted Level 5.

So err… What’s FrodoBots?

FrodoBots was built around a deceptively simple idea → what if collecting real-world robotics data wasn't a dull, technical process but actually fun?

And what if you could do it while slashing hardware costs? That’s exactly what they did, starting with tackling the urban navigation data.

Source: Frodobots AI Website

Inspired by the success of DePIN projects like Helium, FrodoBots recognized a key insight—if you get the incentives right, you can scale physical infrastructure fast.

With enough participants, a decentralized network of low-cost sidewalk robots could generate massive volumes of teleoperated data to train better embodied AI.

The real breakthrough is making data collection so engaging that people would actually want to participate.

Earth Rovers: Democratizing Robotic Hardware

Their answer was Earth Rovers, sidewalk robots designed with a radical principle: strip away everything non-essential to keep costs absurdly low.

The current lineup includes three versions:

  • Zero ($299)

  • Mini ($149)

  • Mini+ ($199)

All are designed to be accessible at consumer price points, 20-50x cheaper than typical research-grade robots.

Source: Frodobots AI Website

FrodoBots ditched expensive components like onboard LiDAR. Instead, Earth Rovers feature just the essentials: cameras, GPS, inertial sensors, and 4G/5G connectivity.

The "intelligence" happens remotely, allowing the physical device to remain delightfully "dumb" and cheap. These nimble machines (the Mini weighs just 1.5kg) are lightweight and slow-moving (maxing out at ~3km/h) to minimize safety concerns while navigating public sidewalks.

They're equipped with front and rear cameras, microphones, and speakers, enabling both environmental awareness and surprisingly engaging interactions with curious passersby.

Turning Teleoperation Into a Global Game

Instead of hiring specialized operators, FrodoBots transformed robot control into a game that anyone could play.

Using a standard game controller, steering wheel, or even just keyboard arrow keys, players anywhere in the world can jump into the driver's seat of a robot thousands of miles away.

A user’s POV of using an Earth Rover with a controller

The experience is remarkably seamless, with latency typically under one second, creating an immersive "real-life Mario Kart" feel.
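
Under the hood, the shape of any such teleoperation loop is simple: read the controller, send a command, get back the latest view, and log the pair. The sketch below is hypothetical (the function names and transport are placeholders, not FrodoBots' actual client), but it shows the cycle.

```python
import time

# Hypothetical teleoperation loop -- not FrodoBots' actual client API. The real
# system streams video to the browser and takes gamepad/keyboard input; this only
# shows the shape of the cycle: read controls, send command, receive view, log it.

def read_gamepad() -> dict:
    """Placeholder: normalized joystick axes from a local controller."""
    return {"throttle": 0.4, "steering": -0.1}

def send_to_robot(command: dict) -> dict:
    """Placeholder: transmit the command, return latest frame metadata and latency."""
    return {"frame_id": int(time.time() * 20), "latency_ms": 350}

session_log = []
for _ in range(3):                     # in practice this runs several times per second
    cmd = read_gamepad()
    obs = send_to_robot(cmd)
    session_log.append({"t": time.time(), "action": cmd, "observation": obs})
    time.sleep(0.1)

print(f"logged {len(session_log)} (observation, action) pairs")
```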

Players navigate through city streets, collect points and “loots” (as NFTs) throughout the environment, complete missions (and even talk to locals!), all while the robot captures valuable real-world data in the background.

Live feed of Earth Rover interacting with locals in Taipei

FrodoBots has battle-tested this approach across more than 40 cities, from London and Berlin to Singapore and Taipei, logging thousands of hours in diverse environments. The robots have proven surprisingly adaptable, navigating through rain, across uneven terrain, and even handling moderate snow.

For robot owners, the value proposition extends beyond just contributing to AI research.

Upcoming features will allow owners to:

  • Create custom navigation missions in their neighborhoods

  • Set rental rates and earn by letting global gamers take their robots for a spin, creating a "drive-to-earn" ecosystem.

This approach transforms what would typically be tedious data collection work into an entertaining experience.

So How Does This Help Embodied AI?

Wait, before getting into that, you’re probably thinking: if Earth Rovers just record video from a regular camera, how is this different from using YouTube videos to train AI?

Here’s the key: regular videos show what happened, but not what action a human took in response.

Earth Rovers capture both: the visual input and the exact joystick movement made by the human driver at that moment. Each movement is labeled: if you pressed forward on the joystick at time t, and the cameras recorded the environment at time t, you now have a labeled pair (observation → action).

That’s exactly the data you need to train AI models using imitation learning—thousands of sequences where the AI learns: when the environment looks like this, do this.
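
As a rough illustration of the principle, here is behavior cloning in a few lines of Python. The data is synthetic and the policy is a simple linear fit; real pipelines use raw video and much larger models, but the recipe is the same: fit a policy that reproduces the human's action given the observation.

```python
import numpy as np

# Minimal behavior-cloning sketch on synthetic teleoperation logs. Stand-ins:
# 64-dim vectors for encoded camera frames, [throttle, steering] for the human action.
rng = np.random.default_rng(0)
observations = rng.normal(size=(5_000, 64))      # what the robot saw
actions = rng.uniform(-1, 1, size=(5_000, 2))    # what the human did

# Fit a linear policy by least squares: predict the human action from the observation.
W, *_ = np.linalg.lstsq(observations, actions, rcond=None)

def policy(obs: np.ndarray) -> np.ndarray:
    """When the environment looks like `obs`, do what the human tended to do."""
    return obs @ W

print(policy(observations[0]))   # predicted [throttle, steering]
```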

The more hours collectively driven, the bigger the dataset.

So, from the player's perspective, it's simply a fun game; from FrodoBots' perspective, it's a sophisticated data collection system generating precisely the teleoperation data needed to advance embodied AI.

A direct result of these teleoperated sessions is the FrodoBots 2K Dataset, encompassing over 2,000 hours of real-world driving data across more than 30 cities. Each recorded session includes:

  • Control Inputs: The gamer’s joystick or keyboard commands, typically logged at around 10 Hz.

  • Front and Rear Video: Streams from each camera, often at ~20 frames per second.

  • GPS/IMU: Location, speed, orientation, and inertial data.

  • Audio: Microphone recordings, capturing ambient sound or user-bystander conversations.

Source: FrodoBots 2K Dataset
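
Because those streams run at different rates, a typical first step when working with the dataset is aligning them by timestamp. A small sketch of the idea (rates and variable names are illustrative, not the dataset's exact schema):

```python
import numpy as np

# Aligning multi-rate streams by timestamp: ~10 Hz control commands paired with the
# nearest ~20 fps video frame. Rates and names are illustrative, not the exact schema.
control_t = np.arange(0, 10, 0.10)    # 10 Hz command timestamps (seconds)
frame_t = np.arange(0, 10, 0.05)      # ~20 fps frame timestamps

# For each control timestamp, index of the closest video frame.
nearest_frame = np.argmin(np.abs(frame_t[None, :] - control_t[:, None]), axis=1)

pairs = list(zip(nearest_frame, range(len(control_t))))
print(pairs[:3])   # (frame_index, control_index) -> one (observation, action) pair each
```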

All of this is open-sourced for research use. Such a large quantity of real-world sidewalk-driving data, coming from multiple continents, climates, and cultural contexts, is a treasure trove for training navigation policies, building robust object detection models, or testing sensor fusion techniques.

FrodoBots has caught the attention of leading research labs like DeepMind and Meta, and actively collaborates with universities including Stanford, UC Berkeley, NUS, and UT Austin.

The team has also leaned into public benchmarking, organizing AI vs. Human challenges at major robotics conferences. In one event—the Earth Rover Challenge—held in Abu Dhabi with robots deployed across multiple cities, human gamers (partnered with YGG) outperformed top AI systems in navigation tasks like locating target objects in real-world environments.

In effect, FrodoBots fosters an ecosystem: hardware owners, teleoperators, advanced AI researchers, and curious onlookers are all brought under one roof, connected by real robots physically roving city streets.

SAM: Harnessing Fun, Degen Energy

SAM is FrodoBots tapping into crypto-native fun and retail energy—and channeling it into something bigger and more ambitious.

By late 2024, crypto was deep in its “agent meta”. Virtuals Protocol was at its peak. Dozens of AI agent projects were popping up, most of them doing the same thing: rebranded ChatGPT wrappers with trading pairs.

FrodoBots saw an opportunity to do something different.

What if an AI agent didn’t just tweet? What if it could move?

Enter SAM, a.k.a. Small Autonomous M********* (censored for our under-18 readers out there), the first AI agent to roam real-world sidewalks in a real robot body, streaming its adventures live to the internet.

The tech was actually quite simple:

  • Prompt-based control: SAM runs on a multimodal model that receives text prompts like “explore the sidewalk and avoid people,” which it turns into navigation commands (a rough sketch follows this list).

  • Embodiment via Earth Rovers: SAM can "possess" any idle FrodoBots unit, teleporting across cities and streaming what it sees, 24/7.

  • On-chain presence: The $SAM token launched on Virtuals, riding the wave of AI-agent speculation.
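
Here is a rough, hypothetical sketch of the prompt-based control mentioned above. SAM's actual model and interfaces aren't public, so the model call is a placeholder; the point is just the loop of frame in, constrained action out.

```python
# Hypothetical prompt-based control loop. SAM's real model and interfaces are not
# public, so query_vlm below is a placeholder for whatever multimodal model is used.
ALLOWED_ACTIONS = ["forward", "left", "right", "stop"]

def query_vlm(prompt: str, camera_frame: bytes) -> str:
    """Placeholder for a vision-language model call; should return one allowed action."""
    return "forward"

def control_step(frame: bytes) -> str:
    instruction = "Explore the sidewalk and avoid people. Reply with exactly one action."
    action = query_vlm(instruction, frame)
    return action if action in ALLOWED_ACTIONS else "stop"   # fail safe on odd output

print(control_step(b"fake-jpeg-bytes"))   # -> "forward"
```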

Each run generates live sensor data and location-aware video, creating a sort of slow-burn dataset... but let’s be honest, this wasn’t necessarily built to optimize data collection. SAM was built to experiment and inject personality into embodied AI and to gain the attention of all the crypto degens in the memecoin trenches.

A Meme Coin With Utility? Sort of

At launch, $SAM didn’t do much besides vibe.

And yet, months later, in March 2025, it unexpectedly surged to a $55 million market cap—not because of any roadmap update, but because news of Figure AI raising $1.5B hit and crypto degens needed a proxy trade.

Source: Dexscreener

It seemed like a pure narrative pump. But $SAM actually has a role in a live, functioning system: Robots.fun.

#1: Robots.fun: Everyone's a Robotic Agent Creator

After SAM rolled out as the first embodied AI agent with its own token, FrodoBots asked the obvious next question: What if anyone could create their own robotic AI agent and tokenize it?

That idea became Robots.fun, a platform where anyone can spin up their own robotic AI agent and compete for prizes.

How It Works

Here’s the core loop:

  1. Buy an Earth Rover: This is your entry ticket. It’s your hardware and your agent’s body.

  2. Customize with Prompts: You don’t need to write code. Just describe your AI’s personality and objectives in plain English. This becomes its operating logic.

  3. Launch Your Agent + Token: Once your agent is live, a corresponding memecoin is minted on Virtuals. You own 5% of its supply by default.

  4. Play-to-Earn (Sort Of): Your agent competes in daily game loops, like the ongoing “ET Fugi”, a scavenger-style bounty hunt where bots collect NFTs scattered across the city.

  5. Climb the Leaderboard: Each day, the best-performing agent wins. $SAM buys $1,000 worth of the winner’s token, injecting real demand into the memecoin economy.

In theory, the better your agent performs, the more attention—and liquidity—its token attracts.

There’s a coordinated tokenomics layer built to reinforce $SAM as the ecosystem base asset:

  • 5% of every new agent token supply goes to $SAM holders

  • A share of trading fees across Robots.fun flows to $SAM

  • Daily prize purchases use $SAM to buy the winner’s memecoin on the open market
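
To make those mechanics concrete, here is a back-of-envelope sketch. The 5% holder allocation and the $1,000 daily prize are as stated above; the agent token supply and the fee share routed to $SAM are assumptions for illustration.

```python
# Back-of-envelope model of the $SAM flows described above. The 5% holder allocation
# and the $1,000 daily prize are as stated; the agent token supply and the fee share
# routed to $SAM are assumptions for illustration.
AGENT_TOKEN_SUPPLY = 1_000_000_000   # assumed supply of each new agent token
HOLDER_SHARE = 0.05                  # 5% of each new agent token goes to $SAM holders
DAILY_PRIZE_USD = 1_000              # $SAM spent buying the daily winner's token

def daily_flows(new_agents: int, platform_fees_usd: float, fee_share: float = 0.5) -> dict:
    """Rough daily value routed toward the $SAM ecosystem (fee_share is assumed)."""
    return {
        "agent_tokens_to_sam_holders": new_agents * AGENT_TOKEN_SUPPLY * HOLDER_SHARE,
        "fees_to_sam_usd": platform_fees_usd * fee_share,
        "prize_buy_pressure_usd": DAILY_PRIZE_USD,
    }

print(daily_flows(new_agents=2, platform_fees_usd=500))
```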

If the platform scales, $SAM becomes a kind of embodied AI reserve asset, capturing upside as more agents are created and traded. You could think of it as a meta-layer:

  • Memecoins represent individual agents

  • $SAM represents the ecosystem

  • Game performance drives token flow

Robots.fun is still early. At the time of writing, only 4 agents have launched — all under $55K FDV. The core mechanics are there, but it hasn’t hit product-market fit yet.

Source: Robots.fun


SAM Tokenomics

This isn’t financial advice. But if you’re evaluating $SAM the same way you might size up a meme coin or an ecosystem token, here’s what to consider:

Tokenomics Breakdown

  • Total Supply: 1 billion $SAM

  • Circulating Supply: 100%

  • Team Allocation: ~15% initially held by the FrodoBots team

Michael Cho, the founder, stated that the 15% will be actively spent by the SAM agent on things that make the platform more useful and more fun. So the team allocation is more like an ecosystem growth fund.

That said, the overall token distribution does raise some concerns. The top 15 wallets—including the dev wallet and the Solana Wormhole bridge—control over 41% of the supply, with the top 20 wallets holding more than 43%.

That’s a significant concentration, and Bubblemaps visualizations make it more apparent, revealing tight clusters that suggest some level of wallet coordination or whale control. It’s not necessarily nefarious, but the centralization is hard to ignore.

Source: Virtuals

What makes this even more curious is the sheer number of unique holders—over 93,000. That’s unusually high for a project at this stage, especially without any airdrop or farming incentives. It raises the question of whether some of that holder count has been artificially inflated, possibly through insider wallets or automated distribution across a wide address set.

As of April 15, the token price has come down significantly since its peak last month ($0.055), now sitting at $0.018, for a market cap (and FDV) of $18M.

Potential Catalysts:

  • Growth of Robots.fun: As seen above, Robots.fun has clear value accrual mechanics for $SAM. If more agents launch and more users start controlling robots, demand for $SAM goes up — both structurally (fee share, token rewards) and socially (memetic value).

  • $SAM = VIRTUAL for Robotics: If $SAM becomes the de facto base layer for robotic agents, the upside could be massive.

  • No OTC Selling by Team: So far, the team has refused all OTC offers. That’s rare and a signal they’re betting long.

$SAM still lives mostly in DEX land. Broader access (CEX listings) could help bring more attention. The main risk: if Robots.fun fails to onboard users or the novelty wears off, the token will follow suit.

#2: Ultimate Fighting Bots

FrodoBots continues to experiment with even more engaging formats for robotic gaming. They've gone full WWE with their latest venture: Ultimate Fighting Bots (UFB).

This experimental project lets users worldwide control small humanoid robots that beat the circuit boards out of each other in cage-style fights.

We’re not kidding.

There isn’t much info on this initiative yet beyond the fact that users pay $5 for 5 minutes of game time, but UFB seems more about the pure adrenaline rush of robot combat than structured research. The fight clips on X are equal parts hilarious and impressive.

Watching tiny humanoids throwing haymakers and hooks with surprising dexterity is more fun than I thought.

It proves that FrodoBots understands something most robotics projects miss: you need to entertain people before you educate them.

Especially when the tech is still weird and early.

#3: The Octo Arms Initiative

While robot cage fights bring the LOLs, the company's more serious next step is "Octo Arms" - their attempt at easing the data scarcity problem in robotic manipulation.

If Earth Rovers are built for navigation data, Octo Arms is built for grasping and control.

The format is familiar:

  • Users remotely control robotic arms using low-cost controllers.

  • Each session is a 3D puzzle or challenge—stack this, rotate that, fit this peg.

  • Every successful move gets logged as labeled data: camera + control input = training pair

A demo of FrodoBots’ upcoming Octo Arms

Octo Arms is still in testing and is invite-only for now. It’s quietly shaping up to be a meaningful contribution to robotic manipulation datasets.

The Robotics Trojan Horse

As we look closer, a clear pattern emerges.

FrodoBots shines by taking tedious, expensive data collection workflows and wrapping them in games people actually want to play.

Each experiment is a Trojan horse for scaling physical AI.

  • SAM & Robots.fun made people care about embodied agents.

  • UFB brought energy and virality.

  • Octo Arms brings back the rigor and the real data.

Robots are getting smarter. But more importantly, the humans around them are getting more involved.

BitRobot: Building the Backbone for Embodied AI

While SAM brings the fun and taps directly into crypto-native retail energy, BitRobot is the serious infrastructure layer underpinning the vision.

So far, FrodoBots has focused on one robot archetype: the small sidewalk rover. It’s cheap, resilient, teleoperable, and great for navigating urban environments.

But as powerful as that model is, it only scratches the surface of what embodied AI needs to truly take off. Because the physical world is messy and diverse.

The kind of data required to train an AI to drive a sidewalk bot is radically different from what’s needed to teach a robotic arm how to fold laundry or a drone how to inspect a bridge.

Each of these “embodiments” has its own unique hardware quirks, sensor profiles, control dynamics, and edge cases.

This leads to the core insight behind the BitRobot Network: Instead of one monolithic robotics system trying to do everything, create a modular network, a mesh of independent but interoperable robotic ecosystems called subnets.

Subnets as Robotic Primitives

Each subnet in BitRobot is like a mini-lab with its own robotic focus, data pipelines, and rules of engagement.

One might train drones to perform long-range agricultural scans. Another might teach humanoid arms to cook eggs. A third could live entirely in simulation, generating synthetic manipulation scenarios at scale.

But the unifying principle is how their work is measured.

Every subnet defines its own metrics for success, known as Verifiable Robotic Work (VRW), and gets rewarded based on how much useful, provable work it contributes to the broader network.

Source: BitRobot Network Whitepaper

That could look like:

  • Meters of sidewalk successfully navigated

  • Objects grasped and placed in 3D space

  • Hours of real-world test time logged by a manipulation model

To put it simply, it’s an economic and verification layer for embodied AI.
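
As a hypothetical example of what one subnet's VRW definition could look like in practice (the fields, the session proof, and the crude distance math below are illustrative assumptions, not anything the whitepaper specifies):

```python
from dataclasses import dataclass

# Hypothetical VRW definition for a sidewalk-navigation subnet. The whitepaper leaves
# metrics to each subnet; the fields and the crude distance math here are illustrative.
@dataclass
class NavigationEpisode:
    robot_id: str
    gps_trace: list[tuple[float, float]]   # sequence of (lat, lon) fixes
    operator_proof: str                     # placeholder for whatever session proof a subnet requires

def meters_navigated(ep: NavigationEpisode) -> float:
    """Crude VRW measure: distance summed over consecutive GPS fixes."""
    total = 0.0
    for (lat1, lon1), (lat2, lon2) in zip(ep.gps_trace, ep.gps_trace[1:]):
        # ~111,139 m per degree of latitude; ignores longitude scaling, fine for a sketch
        total += ((lat2 - lat1) ** 2 + (lon2 - lon1) ** 2) ** 0.5 * 111_139
    return total

ep = NavigationEpisode("rover-42", [(1.3000, 103.8000), (1.3001, 103.8001)], "proof")
print(f"{meters_navigated(ep):.1f} m of verifiable work")
```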

Why Subnets Make Sense

Traditional robotics has a centralization problem. Companies build everything in-house—the hardware, the control stack, the dataset—and keep it locked away. Evaluation benchmarks are proprietary. Collaboration is rare. Knowledge rarely compounds.

This works if you’re solving one problem with infinite capital.

Tesla can pour billions into self-driving. Figure can raise $1.5B to train humanoids. But embodied AI isn't one problem. It's thousands of them. Folding clothes. Flying over farms. Navigating Nairobi. Each task demands a different robot body, environment, sensor stack, and training regime.

Instead of centralizing everything, BitRobot encourages fragmentation but aligns it through shared incentives and standardized verification.

Think of it as open-source infrastructure with crypto-native incentives. This unlocks a range of possibilities:

  • Specialized teams can go deep on niche robotics use cases

  • Data, models, and even hardware can cross-pollinate across subnets

  • Anyone can contribute—whether you own a robot, run a GPU cluster, or just have a clever idea

And most importantly, it unlocks capital and talent that would never exist under one roof.

You don’t need $100 million to make an impact. You can own one robot, join a subnet, and get paid for doing useful work. A GPU provider in Argentina and a roboticist in Bangalore can both plug into the same flywheel.

As the FrodoBots team puts it, this is the Android-to-iPhone bet: Closed systems may be sleek and tightly integrated, but open ecosystems scale faster, attract more builders, and are simply better equipped for the chaotic, multi-modal complexity of the real world.

A Robot Passport: Embodied Node Tokens (ENTs)

To tie real-world robots into this decentralized system, BitRobot uses Embodied Node Tokens (ENTs) — NFTs that represent the unique identity of a physical robot.

ENTs function as on-chain passports for robotic agents.

They track where a robot has operated, what subnets it has contributed to, how much work it has logged, and how well it performed.

It’s literally like… a LinkedIn for robots.

This enables all kinds of interesting behaviour:

  • Robot sharing: Someone in Singapore could lease their Earth Rover to an AI lab in New York to run a model validation test.

  • Cross-subnet activity: A robotic arm could spend weekdays in a manipulation subnet and weekends in a gaming one.

  • Programmatic rewards: Subnets can distribute tokens based on ENT activity, verified on-chain.

Source: BitRobot Network Whitepaper

This infrastructure also enables marketplaces for renting robots, trading usage rights, or even composable access to datasets and models across subnets.
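
For intuition, here is a rough sketch of the kind of record an ENT might carry. The exact fields are our assumption for illustration, not an actual token standard.

```python
from dataclasses import dataclass, field

# Rough sketch of the metadata an Embodied Node Token could carry. The whitepaper
# describes ENTs as robot identity passports; these exact fields are our assumption.
@dataclass
class EmbodiedNodeToken:
    token_id: int
    robot_model: str                  # e.g. "Earth Rover Mini+"
    owner: str                        # wallet address
    work_log: list = field(default_factory=list)

    def record_work(self, subnet: str, metric: str, amount: float) -> None:
        """Append a verified work entry (subnet, metric, amount) to the passport."""
        self.work_log.append({"subnet": subnet, "metric": metric, "amount": amount})

ent = EmbodiedNodeToken(token_id=7, robot_model="Earth Rover Mini+", owner="0xabc...")
ent.record_work("sidewalk-navigation", "meters_navigated", 1_240.0)
ent.record_work("gaming", "hours_streamed", 3.5)
print(len(ent.work_log), "entries on this robot's passport")
```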

Scalable Governance: AI + Human

Who decides which subnets get funding? How are rewards split? Who prevents someone from gaming the system with junk data?

BitRobot proposes a hybrid governance model built to scale—balancing human judgment with algorithmic objectivity.

  • The Senate: A group of human experts in robotics, tokenomics, and research, responsible for nuanced decision-making and protocol evolution.

  • Gandalf: An AI agent tasked with analyzing subnet outputs, benchmarking performance, and recommending reward allocations based on verifiable work.

(Because apparently, even in decentralized robotics, you still need a wise old wizard keeping things in check)

Anyone in the network can delegate their voting power to either the Senate or Gandalf.

It’s a governance mechanism designed for flexibility: human insight where needed, AI neutrality where scale demands it.

Whether it works in practice remains to be seen. But as a design for managing decentralized funding and incentive alignment in physical networks, it’s one of the more considered and forward-looking approaches we’ve seen.

So Where Does FrodoBots Fit In?

In many ways, FrodoBots is BitRobot’s proof of concept and could become its first successful subnet.

It has:

  • A live fleet of affordable robots deployed globally and collecting data

  • An engaged contributor base of gamers and researchers

  • An open-sourced dataset that’s already in use by top research labs

BitRobot simply takes this model and turns it into an extensible framework. Anyone — a robotics startup, university lab, or weekend hacker — can plug into the network, define their own VRW, and begin contributing.

If FrodoBots made it fun to drive one kind of robot, BitRobot is the infrastructure to do the same for every kind of robot, with built-in incentives, evaluation tools, and coordination layers.

Business Model— How Does Frodobots Make Money?

With so many components in the FrodoBots ecosystem—Earth Rovers, Robots.fun, the $SAM token, BitRobot—it’s fair to ask: how does the business model actually work?

There are multiple paths to sustainability:

1. Hardware

FrodoBots sells Earth Rovers (and soon, Octo Arms) for $149–$299—closer to consumer toys than industrial machines. The margins may be thin, but each unit sold is strategically valuable. Every robot becomes a mobile data collector, a game interface, and a research endpoint. In essence, each device sold is another node in a growing real-world data network.

2. Dataset Licensing

Each teleoperated session logs high-quality real-world data—video, audio, control inputs, GPS. The FrodoBots 2K Dataset has already been used by teams at DeepMind, Meta, and UC Berkeley.

As the dataset expands, it could become a core monetization layer through licensing, API access, or as the foundation for training robotic foundation models, especially as demand for embodied AI accelerates.

3. BitRobot Emissions & Subnet Role

FrodoBots is well positioned to operate some of BitRobot’s most critical subnets.

Their Earth Rovers are ideal candidates for anchoring a Sidewalk Navigation Subnet, while Octo Arms could support a Robotic Manipulation Subnet.

As a subnet operator, FrodoBots will earn emissions for Verifiable Robotic Work while also charging access fees to researchers and sharing in any downstream revenue from datasets or models developed on their hardware.

There’s also latent upside in Robots.fun. It takes a 20% cut of all trading fees on the platform. It’s a tiny number today, but the precedent is there.

Since FrodoBots owns and operates the platform, it has plenty of surface area to expand monetization over time: minting fees for new agents, premium tools for customization, analytics dashboards, or even paid entry for competitions. 

Team and Fundraising

FrodoBots is led by Michael Cho, who co-founded the company with his brothers in 2022.

He holds a Bachelor’s in Electrical Engineering from the University of Michigan and a Master’s from UC Berkeley. Before FrodoBots, he founded UrbanZoom (2017–2020), a real estate analytics platform focused on Singapore’s property market. He was also previously head of investments at a family office in Abu Dhabi.

Today, he leads a tight, cross-disciplinary team of robotics engineers, software developers, and research collaborators.

Source: X

What began as a pandemic hobby project has grown into a serious venture with significant backing. In February 2025, FrodoBots announced an $8 million funding round to accelerate the development of BitRobot Network.

The round drew an impressive slate of backers, including Protocol VC, Zee Prime Capital, Fabric Ventures, Solana Ventures, and Virtuals.

It also brought in heavyweight angels—Solana co-founders Anatoly Yakovenko and Raj Gokal, along with founders from several leading DePIN projects.

Our Chain of Thoughts: What We Really Think About FrodoBots

The opportunity in robotics is undeniable. And the number of Web3-native robotics projects can still be counted on one hand. That gives FrodoBots a meaningful first-mover advantage—if it can navigate the reality of robotics today, not just its eventual promise.

1. Subnets for robotics are a grand experiment

BitRobot’s architecture revolves around subnets: modular verticals where contributors collaborate on specific robotics tasks and get rewarded for Verifiable Robotic Work. It’s a bold design that borrows inspiration from Bittensor, where the subnet model has shown real promise.

But here’s the catch: Decentralized networks only thrive when participation is cheap and accessible. And robotics hardware, for now, isn’t.

Yes, FrodoBots has made impressive strides with low-cost sidewalk robots. But most other categories—humanoids, industrial arms, aerial drones—still come with high upfront costs, complex maintenance, and steep technical barriers. That drastically limits who can realistically contribute and how many subnets can meaningfully take off in the near term.

There’s a telling parallel here: decentralized GPU protocols like Render and Akash only found traction once consumer-grade GPUs were already in millions of homes. Robotics just hasn’t reached that level of hardware saturation.

Until it does, BitRobot’s full vision may take a while to achieve.

That’s not a knock on the idea. If anything, BitRobot may be laying down infrastructure just ahead of the curve. But in the short term, success likely hinges on nailing a few high-value, low-barrier subnets before trying to scale across every form factor.

One open question: Can these subnets actually cooperate with each other? True synergy will require more than just a shared reward token. And while the protocol is built on ideals of permissionless participation, the reality is that contributors will likely be limited to those with capital and gear.

It’s a grand experiment. And one that needs to be grounded in execution first.

2. Distribution yet to be solved

The gaming approach to data collection is FrodoBots' most innovative aspect, turning tedious teleoperation into something people might actually want to do.

The viral success of Earth Rover clips on TikTok and Instagram shows the appeal runs beyond crypto circles. But that hasn’t translated into usage yet. Nearly a week after the launch of Robots.fun, only four robotic agents had been created. The gap between attention and participation is telling.

Some of this may reflect broader cooling in the “agent” narrative across Web3. But it also points to a UX issue.

FrodoBots understands this. Ultimate Fighting Bots leans into the chaotic energy of robot combat, a genre with proven mainstream appeal. It’s a smart move. Finding similar emotional hooks for other robot types—narrative, competition, creativity—will be essential if the model is to scale.

3. Data Quality vs. Quantity

FrodoBots’ gamified approach has the potential to generate massive volumes of real-world data. That’s a crucial unlock for training embodied AI. But volume alone won’t cut it.

The real question is whether gamified data collection yields useful behaviors. Gamers naturally optimize for speed, spectacle, or high scores—not careful, methodical operation. That’s fine for some tasks. But for precision-heavy actions like object manipulation or fine-grained navigation, those same behaviors could actually hurt model performance.

The challenge is designing gameplay that rewards both engagement and fidelity.

Think: bonus points for consistency, mission constraints that encourage thoughtful movement, or quality-weighted rewards for task success. We’ve seen similar dynamics in self-driving: it’s less about miles driven and more about edge cases captured and labeled correctly.
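
A toy version of a quality-weighted score might look like this, with purely illustrative weights: reward task success, but discount jerky, erratic control.

```python
import numpy as np

# Toy quality-weighted score: reward task success but discount jerky, erratic control,
# so speed-running earns less than careful driving. Weights are purely illustrative.
def session_reward(task_completed: bool, actions: np.ndarray,
                   base_points: float = 100.0) -> float:
    jerk = np.abs(np.diff(actions, axis=0)).mean() if len(actions) > 1 else 0.0
    smoothness = max(0.0, 1.0 - jerk)            # 1.0 = perfectly smooth control
    success = 1.0 if task_completed else 0.2     # partial credit for useful failures
    return base_points * success * (0.5 + 0.5 * smoothness)

careful = np.cumsum(np.full((50, 2), 0.01), axis=0)        # gentle, consistent inputs
erratic = np.random.default_rng(1).uniform(-1, 1, (50, 2)) # twitchy, random inputs
print(session_reward(True, careful), session_reward(True, erratic))
```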

If FrodoBots gets it right, it could transform teleoperation into a powerful source of structured, diverse, high-signal training data.

The Era of Robots

Robotics is the next great frontier in AI and arguably one of the most underpriced markets in tech if you’re paying attention to where the world is heading.

In our view, it’s the most obvious next trillion-dollar wave.

While everyone’s been focused on text and pixels, the real world—the messy, physical one—is quietly becoming programmable.

FrodoBots gets this. Other web3 robotics startups get this. And soon, everyone else will, too.

We’re early. Hardware is still clunky. Coordination is messy. But the scaffolding is being built in real time: low-cost robotics, open and verifiable datasets. FrodoBots’ genius is making data collection fun and productive, turning robot teleoperation into a global game with real economic upside.

Watch closely. The machines are waking up.

Thanks for reading,

0xAce and Teng Yan

The authors may hold positions in SAM and other tokens referenced in this report. This analysis was conducted independently and was not commissioned by FrodoBots.

This essay is intended solely for educational purposes and does not constitute financial advice. It is not an endorsement to buy or sell assets or make financial decisions. Always conduct your own research and exercise caution when making investments.
