
I had a stark realization this week: the robots are coming.
Founders working inside the field keep telling us the same thing: the tech is moving far faster than it looks from the outside. The technical hurdles are real, but they see them as a matter of time, not an impossibility.
And when it happens, the market that opens up may be far larger than most forecasts suggest.
Let’s look at some early signs.
#1: Humanoid Robots Are Entering Early Production
Humanoid robots have moved from stealth development to active pilots. Companies like Tesla, Figure, Agility Robotics, Boston Dynamics, and Unitree are now publicly racing to build general-purpose systems that can walk, lift, grasp, and operate in environments designed for humans.
This is a shift from task-specific automation to more flexible physical agents, intended for a broader range of real-world applications.

We built an internal model to track how industrial-grade humanoid robots might scale through 2030, globally.
The forecast pulls from a mix of public signals: Tesla’s production ramp, BYD’s internal deployments, Chinese auto contracts from Unitree and UBTech, and mid-range shipment estimates from UBS and Goldman. We layered in our own assumptions: Tesla hitting its stride in manufacturing, no major industry setbacks, and a few other players roughly staying on track.
As of mid-2025, there are roughly 150 publicly disclosed humanoids in active use across the U.S., deployed in factory and warehouse pilots. These include early pilots by Tesla, Amazon/GXO (Agility Digit), BMW (Figure-01), Mercedes-Benz (Apptronik), and 1X NEO.
What happens in the coming years will follow a familiar pattern:
A successful pilot with 20 robots in one section of a factory justifies a larger order to automate an entire production line. An initial partner like BMW wouldn't jump from 20 robots to 20,000. A more logical next step is an order for 100-300 units to achieve a meaningful impact in a targeted area.
By 2026, companies like Figure and Apptronik will use their 2025 success stories to sign deals with new major customers to start their own pilot programs (another ~20-50 units per new customer).
The robotics companies will be ramping up their low-volume manufacturing capabilities. While still not mass production, they will be able to fulfill orders in the hundreds, not just the dozens.
The "First Wave" Effect: The success of the pioneers will trigger a wave of "fast follower" companies in automotive, logistics, and electronics manufacturing to initiate their own pilot programs to avoid falling behind competitively.
Based on these inputs, we project that the number of humanoids globally could reach 820,000 units by 2030, marking the shift from proof-of-concept to commercial scale.
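To make the mechanics of that projection concrete, here is a minimal sketch of a pilot-to-fleet compounding model. The base figure and the yearly multipliers below are hypothetical placeholders chosen for illustration, not the actual inputs to our forecast (which also layers in China deployments, Tesla's ramp, and analyst shipment estimates).

```python
# Illustrative sketch of a pilot-to-fleet adoption curve. The numbers are
# hypothetical placeholders, not the assumptions behind our actual forecast.

def project_units(base_units: int, yearly_multipliers: dict[int, float]) -> dict[int, int]:
    """Compound a starting installed base by a per-year growth multiplier."""
    units = {min(yearly_multipliers) - 1: base_units}
    current = float(base_units)
    for year, multiplier in sorted(yearly_multipliers.items()):
        current *= multiplier
        units[year] = round(current)
    return units

# Hypothetical: a global base of ~1,500 humanoids at the end of 2025, with growth
# slowing each year as orders move from pilots (tens of units) to fleets (hundreds+).
forecast = project_units(1_500, {2026: 5, 2027: 4.5, 2028: 4, 2029: 3, 2030: 2})
print(forecast)
# {2025: 1500, 2026: 7500, 2027: 33750, 2028: 135000, 2029: 405000, 2030: 810000}
```

The point of the sketch is the shape of the curve, not the exact numbers: early multiples are large because a single successful pilot unlocks a line-scale order, then the multiples compress as the installed base grows.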
#2: Economic Pressure Is Driving Adoption
The macroeconomic case for humanoid robots is strong. Human labor is by far the largest economic sector globally, at over $30 trillion annually. Labor shortages are rising due to lower birth rates, reduced immigration, and earlier retirements across major economies.
Labor is becoming more expensive and less available. This is a structural trend, not a cyclical one.
In this context, even partial automation of physical work becomes economically significant. Humanoids that can take over repetitive or hazardous tasks could address a broad range of gaps without requiring new physical infrastructure. The potential market is vast, even if only a small fraction of tasks are eventually automated.
Traditional market research pegs the global robotics sector at around $82 billion in 2024, with some forecasts reaching $448 billion by 2034. However, we believe these figures likely understate the true potential. As is common with emerging technologies, early forecasts often underestimate market size and overlook the breadth of applications that human innovation can unlock over time.
We would place the 2034 market closer to $1 trillion, suggesting over $900 billion in new opportunity within this decade alone.
#3: China Is Scaling Faster Than Anywhere Else
China now accounts for the majority of global industrial robot installations. In 2023–2024, the country deployed over 276,000 new industrial robots, making up 51 percent of all new global installations. This pace shows no sign of slowing.
Beijing is also backing the sector with record-scale investment. In March 2025, the National Development and Reform Commission (NDRC) launched a state-backed venture initiative targeting up to ¥1 trillion (roughly US$138 billion) in funding over the next two decades. The focus is robotics, artificial intelligence, and advanced manufacturing.
This is an order of magnitude larger than any previous state fund dedicated to robotics and signals a clear national strategy to dominate the next era of industrial automation.
#4: Private Capital Is Moving Quickly
Private investment in robotics has remained strong, even amid broader market volatility. In 2024, total funding reached US$7.2 billion. Early 2025 figures are only slightly lower and remain concentrated in high-profile raises for humanoid systems and physical AI platforms.
Figure AI closed a US$675 million round, while Physical Intelligence raised US$470 million. These are not fringe bets. Investors are positioning around what they see as an execution-stage opportunity. The assumption is clear: the robotics breakout is no longer speculative, but near enough to fund.

The bets are big because the prize is bigger. Once robots are able to move through unstructured environments and complete tasks that today require a human, the demand will be exponential, spanning multiple industries, geographies, and use cases.
Physical Turing Test: The Inflection
When ChatGPT arrived, we all felt it. It could write, explain, joke, and help in ways that felt startlingly close to human. That moment was the turning point for AI. The Turing test was passed. ChatGPT became the fastest app ever to reach 100 million users (two months!).
Robotics is still waiting for its own version of that shift. Dr Jim Fan at NVIDIA gave it a name: the Physical Turing Test.
Imagine the morning after a house party you threw for a crowd of college friends. The night was loud and loose: music, dancing, drinks flowing. Then everyone left. Now it’s just you and the mess: cups scattered across the floor, bottles knocked over, unwashed plates.
Everything is a mess (damn).

Can a robot enter the post-party house, clean the clutter, load the dishwasher, wipe the counters, and reset the furniture, so convincingly that you can’t tell whether a human or a machine did the work?
I asked ChatGPT to clean up the mess in the first image. Easy to imagine, much harder to actually do.

Even on the best real-world robotic task benchmark, Stanford’s BEHAVIOR-1K, a scripted “optimal” policy completes only 40% of runs in simulation and 22% on real hardware. Nearly half of the real-world failures come from problems with grasping.
Just moving through clutter is a challenge. Robots cruise at roughly 0.5 m/s inside cluttered environments, a third of the 1.4 m/s pace humans stroll without thinking.
We’re still an order of magnitude away from passing the Physical Turing Test.
Part I: The Rise of “Generalist” Robot Policies
What gets us closer to passing the physical Turing test?
Most paths converge on the same idea: robotic AI that generalizes. Robots with the ability to reason through unfamiliar settings, adapt on the fly, and recover from mistakes without every edge case hardcoded.
That doesn’t come from clever hardware alone. It depends on the policy.
In robotics, a policy is the brain. It is the model that maps perception to action. It takes the robot’s current understanding of the world (its state) and decides what to do next. A good policy defines behavior. A great one handles surprise.
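To make the term concrete, here is a minimal sketch of a policy as a mapping from observed state to action. The class and parameter names are ours and purely illustrative; real policies are large learned models, not hand-tuned controllers like this one.

```python
import numpy as np

class Policy:
    """Minimal policy interface: map the robot's observed state to an action."""

    def act(self, observation: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class ProportionalReachPolicy(Policy):
    """Toy example: drive a gripper toward a target position.

    observation = [gripper_xyz, target_xyz]; action = velocity command for the gripper.
    """

    def __init__(self, gain: float = 2.0, max_speed: float = 0.5):
        self.gain = gain
        self.max_speed = max_speed  # m/s

    def act(self, observation: np.ndarray) -> np.ndarray:
        gripper, target = observation[:3], observation[3:6]
        command = self.gain * (target - gripper)      # move toward the target
        speed = np.linalg.norm(command)
        if speed > self.max_speed:                    # clamp to a safe velocity
            command = command / speed * self.max_speed
        return command

policy = ProportionalReachPolicy()
obs = np.array([0.0, 0.0, 0.0, 0.3, 0.1, 0.2])        # gripper at origin, target offset
print(policy.act(obs))                                # velocity command toward the target
```

A learned policy replaces the hand-written `act` with a neural network, but the contract is the same: state in, action out.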
Traditional robots do fine in structured environments. Give them a narrow task and a fixed setup, and they’ll perform reliably. But the real world is unstable. Objects shift. Lighting changes. Task-specific systems collapse under that kind of variability.
That’s why the field is pushing toward generalist policies: models trained across diverse tasks, settings, and hardware. Similar to what we’ve seen with LLMs: pretrain broadly, then fine-tune lightly for each new context. The goal is not to hardcode every edge case but to teach transferable physical intuition.
If a robot can clean one apartment and handle the next without rewriting its logic, the economics shift. Hardware matters, but intelligence becomes the constraint.
Intelligence comes from diversity. For robots to be truly intelligent, they must live and learn among us. We need robots that can fail gracefully, so they can learn from mistakes.
Two capabilities make this possible: adaptability and autonomy.
Adaptability is the ability to learn from experience and generalize. If a robot can clean one sink, can it figure out another without starting from scratch? It depends on diverse exposure and models that keep learning without forgetting.
Autonomy is about execution without supervision. Once the robot is in a new environment, can it operate end-to-end without human help? That includes sensing, planning, acting, and recovering from failure.
We got a robot to clean up homes that were never seen in its training data! Our new model, π-0.5, aims to tackle open-world generalization.
We took our robot into homes that were not in the training data and asked it to clean kitchens and bedrooms. More below⤵️
— Physical Intelligence (@physical_int)
4:56 PM • Apr 22, 2025
Foundation models for robotics aim to encode physical common sense: how objects behave, how to manipulate them, and how to move through space. On top of that, they support higher-level reasoning by deciding what to do, not just how.
Companies like Physical Intelligence and SKILD.AI are chasing this vision with serious funding. Their approach centers on a simple idea: scale the data, and the model will generalize.
The Data Bottleneck

Source: Coatue
The catch, though, is that physical AI faces a much steeper data hill than digital AI.
Robots don’t scale like software. Feedback loops are slow and costly. You can’t iterate a thousand times an hour. Each interaction burns hardware. Mechanical parts wear down. And no one wants a robot nanny learning by trial-and-error in their living room.
Unlike LLMs trained on trillions of words, robots learn from multimodal experiences: vision, audio, touch, force, proprioception, and interactions in 3D environments.
By one estimate, the largest robotics datasets today contain about 10⁶ to 10⁷ motion samples. Compare that to the 10¹² examples common in language or vision training, and the asymmetry becomes stark: five to six orders of magnitude (up to 1,000,000x) less data.
A quick look at the available open datasets reveals just how little data we have, and how wide the gap still is:
| Dataset | Size / Details |
| --- | --- |
| Open X-Embodiment | 1M+ trajectories, 22 robot types, 527 skills |
| DROID | 76K trajectories, 350 hours of data |
| | 3,700+ hours of perception video data |
| | 1 billion synthetic demos for dexterous-hand tasks |
| FrodoBots-2K | 2,000 hours of tele-operated sidewalk-robot driving data from 10+ cities |
Furthermore, because the field is still emerging and access to capable robots remains limited, there may be only a few thousand people worldwide who truly know how to prepare and use complex robotics datasets effectively.
Two Approaches to Robotics Data
A. Simulations

Source: Dr Jim Fan’s presentation at Sequoia AI Ascent
The core idea is this:
If a robot has handled 1,000,000 different environments, the odds are that it'll do just fine in the 1,000,001st environment too.
Simulation is how robots learn without breaking. It’s the only place a machine can fall a thousand times and still get up. In simulations, robots can train faster than real-time, encounter rare or dangerous edge cases, and explore movements that are too slow, risky, or expensive to test on physical hardware.
Simulation gives us a way to multiply scarce real-world data. A single demonstration can be replayed across N environments and M motion variations, generating N × M new examples.
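Here is a toy sketch of that N × M multiplication: take one recorded demonstration and replay it across randomized environments and slightly perturbed motions. The function and parameter names are ours; real pipelines randomize far more (materials, lighting, camera pose), but the multiplication works the same way.

```python
import numpy as np

def augment_demo(trajectory: np.ndarray, n_envs: int, m_variations: int, noise: float = 0.01):
    """Expand one demonstration (T x action_dim) into n_envs * m_variations examples.

    Each copy pairs a randomized environment config with a slightly perturbed motion,
    which is the basic idea behind replaying scarce demos across simulated scenes.
    """
    examples = []
    for _ in range(n_envs):
        env_config = {
            "friction": np.random.uniform(0.4, 1.2),                   # randomized surface friction
            "object_offset": np.random.uniform(-0.05, 0.05, size=3),   # object pose jitter (m)
            "light_intensity": np.random.uniform(0.5, 1.5),
        }
        for _ in range(m_variations):
            perturbed = trajectory + np.random.normal(0.0, noise, size=trajectory.shape)
            examples.append((env_config, perturbed))
    return examples

demo = np.zeros((200, 7))                     # one 200-step, 7-DoF demonstration
dataset = augment_demo(demo, n_envs=100, m_variations=10)
print(len(dataset))                           # 1,000 training examples from a single demo
```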
As neural world models and simulators improve, a new kind of scaling law is emerging, one where physical IQ rises with compute used. More compute = more capable policies and smarter robots.
That’s how we scale.

Source: Dr Jim Fan’s presentation at Sequoia AI Ascent
But simulation has limits. It works well for drones and basic locomotion, where the physics are relatively simple. Manipulation is harder. Friction, contact dynamics, and fine-grained sensing are tough to model accurately.
And the biggest challenge is translating success in simulation to actual performance in the real world: what we call the “sim-to-real gap”.
The problem lies at the intersection of the two Ps: Physics and Perception.
Even the best simulators simplify contact, friction, and deformation. Surfaces behave too predictably. Visual scenes lack the messiness of real light, texture, and sensor noise. A policy that performs well in simulation might stumble on a real kitchen floor because the grip is off, or the glare tricks a camera.
To bridge this gap, researchers rely on techniques like domain randomization (training across varied, slightly distorted conditions to encourage robustness) and domain adaptation (making simulated inputs look more like real ones). Some close the loop entirely: train in sim, deploy in the real world, capture failures, retrain, repeat.
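The closed loop is easiest to see in code. The sketch below is a deliberately simplified stand-in: the training and deployment functions are fakes that just track counts and failure rates, but the structure (train in sim, deploy, harvest failures, fold them back into training) is the loop described above.

```python
import random

def train_in_sim(policy_params: dict, sim_episodes: list) -> dict:
    """Stand-in for a simulated training run: here it only counts what it saw."""
    policy_params["episodes_seen"] += len(sim_episodes)
    return policy_params

def deploy_on_robot(policy_params: dict, n_trials: int) -> list:
    """Stand-in for real-world rollouts; returns the indices of failed trials."""
    # Pretend the failure rate shrinks as the policy sees more data.
    failure_rate = max(0.05, 0.5 - 0.0001 * policy_params["episodes_seen"])
    return [t for t in range(n_trials) if random.random() < failure_rate]

policy = {"episodes_seen": 0}
sim_batch = list(range(1_000))                   # initial batch of simulated episodes

for iteration in range(5):                       # sim -> real -> failures -> retrain
    policy = train_in_sim(policy, sim_batch)
    failures = deploy_on_robot(policy, n_trials=50)
    print(f"iteration {iteration}: {len(failures)} real-world failures")
    # Replay each real-world failure across 200 randomized sim variations.
    sim_batch = [f for f in failures for _ in range(200)]
```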
Closing the gap takes more than one fix. It takes higher-fidelity physics, aggressive visual randomization, small but well-targeted real-world data, and continual adaptation.
Simulation will never replace reality. But it’s how we get there faster.
B. Real World Data
To operate reliably, physical AI needs diverse real-world data that captures edge cases and unpredictability, none of which simulators model well.
There are three types of real-world data that matter most. Each captures a different slice of the problem.

1. Multimodal Sensor Data (Interaction + Perception)
For a robot to act with precision, it needs to know two things: where it is, and what’s around it. That means combining signals from its own body with signals from the outside world.
Proprioception (Internal State):
This is the robot’s sense of its own body. Joint angles, motor speeds, torque, and acceleration are all logged in real time.
Encoders and potentiometers track movement at every joint.
Inertial Measurement Units (IMUs) pick up shifts in balance and motion.
Force sensors detect whether a grip is holding or slipping, whether a footstep landed or failed.
Exteroception (External Perception):
This is how the robot sees and touches the world.
Cameras provide rich visual information for object recognition, while LiDAR captures the 3D structure of the environment, enabling navigation and obstacle avoidance.
Tactile sensors on fingers and palms measure contact, pressure, and texture. Recent advancements, such as the F-TAC Hand, have achieved near-human levels of sensitivity, with a spatial resolution of 0.1 mm over 70% of the hand's surface.
Microphones add another layer: the sound of glass clinking or a door closing can signal events outside the field of view.
Each sensor fills in gaps left by the others. Together, they give the robot context. Multimodal fusion, where we layer different data types into a single working model, enables the robot to decide what to do next.
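As a rough sketch, here is what the simplest form of that fusion looks like: bundle each modality into one record and flatten everything into a single state vector. The field names and the naive concatenation are ours; production systems learn per-modality encoders and fuse in a latent space, but the principle is the same.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Proprioception:
    joint_angles: np.ndarray        # rad, one per joint
    joint_torques: np.ndarray       # Nm
    imu_acceleration: np.ndarray    # m/s^2, body frame

@dataclass
class Exteroception:
    rgb_image: np.ndarray           # H x W x 3 camera frame
    lidar_points: np.ndarray        # N x 3 point cloud
    fingertip_pressure: np.ndarray  # kPa per tactile pad

def fuse(proprio: Proprioception, extero: Exteroception) -> np.ndarray:
    """Naive late fusion: flatten and concatenate every modality into one state vector."""
    parts = [
        proprio.joint_angles, proprio.joint_torques, proprio.imu_acceleration,
        extero.rgb_image.ravel() / 255.0,       # crude normalization
        extero.lidar_points.ravel(),
        extero.fingertip_pressure,
    ]
    return np.concatenate([p.astype(np.float32).ravel() for p in parts])

obs = fuse(
    Proprioception(np.zeros(7), np.zeros(7), np.zeros(3)),
    Exteroception(np.zeros((4, 4, 3)), np.zeros((10, 3)), np.zeros(5)),
)
print(obs.shape)   # one flat vector the policy can consume
```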
2. Human Demonstration and Teleoperation
The fastest way to teach a robot a new skill is still the oldest: show it.
Imitation learning, or learning from demonstration (LfD), where robots learn by observing humans, is far more efficient than trial-and-error reinforcement learning. Instead of discovering the rules from scratch, the robot starts with an example.
Recent research shows that well-constructed demonstration datasets, paired with supervised or hybrid algorithms, can reach competent policies in just tens to hundreds of real-world episodes, while pure reinforcement learning (RL) starting from scratch with no demonstrations often needs 10,000 to 1,000,000+ simulated episodes plus an arduous sim-to-real transfer.
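At its core, this is supervised learning on logged demonstrations: behavior cloning. The sketch below fits a linear policy to synthetic "teleoperation" data to keep it self-contained; the dimensions and data are made up, and real systems use large neural policies, but the training signal (regress from observed states to the demonstrator's actions) is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend we logged 200 teleoperated episodes of 50 steps each.
states = rng.normal(size=(200 * 50, 12))                 # 12-dim robot state
true_mapping = rng.normal(size=(12, 7))                  # the demonstrator's (unknown) behavior
actions = states @ true_mapping + 0.01 * rng.normal(size=(200 * 50, 7))   # 7-dim actions

# Behavior cloning as least squares: minimize || states @ W - actions ||
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

def cloned_policy(state: np.ndarray) -> np.ndarray:
    """Imitate the demonstrator: predict the action they would have taken."""
    return state @ W

test_state = rng.normal(size=12)
print(cloned_policy(test_state))   # the cloned policy's action for a new state
```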
Teleoperation is the most common method of demonstration. A human controls the robot remotely, creating high-quality motion data. Interfaces vary (VR headsets, motion-capture suits, etc.), but the idea is the same: people perform the task, robots watch and learn. Teleoperation is the easiest way to bootstrap a robot to a non-zero chance of success at a task, and then you improve from there.
Kinesthetic teaching takes a more direct route. Instead of remote control, the human physically moves the robot’s limbs through the motion.

Tesla’s teleoperation team. Source: Electrek
Some teams are scaling this with crowdsourcing.
Reborn sells an affordable Rebocap motion-capture suit (8,000 units shipped so far) and pairs it with VR titles that label every gesture in real time. Around 200,000 people now play these games each month, converting their motions into high-quality training data.
NRN Agents uses a web-based simulator. Players guide robots through tasks using simple controls, creating useful trajectories with no special gear.
Tesla is hiring operators to wear capture suits and act out specific behaviors for its Optimus robot. Human motion is streamed straight into training pipelines.
Despite its power, LfD faces significant hurdles. The cost of high-quality teleoperation hardware (VR + mocap suits are dropping below $1K, but high-precision kinesthetic rigs still cost $10K+) and the logistical challenges of data collection are substantial.
Furthermore, the "correspondence problem", where we map movements from a human body to a robot with different physical characteristics, remains a key area of research.
3. Annotated Video with Kinematic Labels
It’s easy to assume that robots can learn from YouTube. After all, the internet is packed with footage of people doing things: chopping vegetables, folding laundry, fixing bikes. But most of it is useless.
Why? Because video alone doesn’t tell a robot how something was done. It needs to know the location of every joint and tool in 3D over time.
Researchers are closing this gap by turning raw footage into structured training data.
Some projects use motion trackers or augmented reality markers to embed kinematic data into video as it’s recorded. Instead of just watching pixels, robots see positions, rotations, and trajectories.
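A rough sketch of what such a training record might look like is below. The field names are ours and purely illustrative; the point is that each frame carries explicit 3D kinematics alongside the pixels, which is what turns passive video into something a robot can imitate.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LabeledFrame:
    """One video frame augmented with the kinematic state a robot actually needs."""
    timestamp: float               # seconds since the start of the clip
    rgb: np.ndarray                # H x W x 3 pixels
    joint_positions: np.ndarray    # J x 3, 3D position of each tracked human joint (m)
    joint_rotations: np.ndarray    # J x 4, orientation of each joint as a quaternion
    tool_pose: np.ndarray          # 7-vector: xyz position + quaternion of the held tool

def to_trajectory(frames: list[LabeledFrame]) -> np.ndarray:
    """Stack per-frame tool poses into a T x 7 trajectory a policy can imitate."""
    return np.stack([f.tool_pose for f in frames])
```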
At the University of Washington, the Unified World Models project learns from both labeled robot actions and unlabeled video clips by learning representations and inferring likely actions from videos. In simulations, UWM outperforms standard imitation learning models.
Another tool, URDFormer, takes a single image and reconstructs an entire simulation-ready scene.

Source: URDFormer
The most effective strategies combine both simulation and real-world data. Simulation offers scale: billions of interactions, fast iteration, and risk-free error. But it’s the real world that keeps models grounded. It exposes edge cases and breaks assumptions that never show up in sim.
In practice, robotics will probably follow a similar path to what Waymo has done with self-driving: billions of simulated miles, but also millions on real roads. Imagine a distributed fleet of physical robots, each collecting data and feeding it back into a shared model. Even partial autonomy, spread across enough machines, becomes a powerful flywheel for learning.
🗺️ World Models & Robots
For robots to operate safely and efficiently outside the lab, reacting isn’t enough. They need to anticipate. That’s where world models come in. These are learned simulators that let a robot imagine the consequences of its actions before committing to them.
The world model predicts outcomes (e.g., “if I push the cup, it might spill”).
The policy uses those predictions to select the best action (e.g., “lift the cup instead”).
Unlike traditional simulators that rely on hand-coded physics, world models are trained from experience. Given the robot’s current state and a proposed action, the model predicts the most likely next state. There’s no need to specify every contact force or friction coefficient. The system learns dynamics implicitly from data.
Once trained, a good world model can simulate thousands of possible futures quickly and safely. That means faster learning and fewer broken parts (always a good thing).
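Here is a minimal sketch of how a policy can use a learned world model for planning: imagine many candidate action sequences inside the model, score each imagined future, and execute the first action of the best one (a "random shooting" planner). The dynamics function below is a fixed stand-in so the example runs on its own; in practice it would be a neural network trained on the robot's experience.

```python
import numpy as np

rng = np.random.default_rng(1)

def world_model(state: np.ndarray, action: np.ndarray) -> np.ndarray:
    """Stand-in for a learned dynamics model: predict the next state."""
    A = np.eye(4) * 0.95
    B = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.0], [0.0, 0.5]])
    return A @ state + B @ action

def reward(state: np.ndarray) -> float:
    return -float(np.linalg.norm(state))        # e.g., "get close to the goal at the origin"

def plan(state: np.ndarray, horizon: int = 10, candidates: int = 500) -> np.ndarray:
    """Random-shooting planner: imagine many action sequences, keep the best first action."""
    best_action, best_return = None, -np.inf
    for _ in range(candidates):
        seq = rng.uniform(-1, 1, size=(horizon, 2))
        s, total = state.copy(), 0.0
        for a in seq:                            # roll the sequence out inside the model
            s = world_model(s, a)
            total += reward(s)
        if total > best_return:
            best_return, best_action = total, seq[0]
    return best_action

print(plan(np.array([1.0, -0.5, 0.2, 0.0])))     # first action of the best imagined future
```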
Google’s Dreamer showed that world models can be practical. It learned control policies using just hours of real-world interaction. The key was to compress sensor inputs into a latent space, predict forward in that compressed form, and decode only when needed. This made the process far more efficient.
World models are branching into three broad directions:
Latent-only models (Dreamer, RoboDreamer) operate entirely in compressed representation space. They’re fast and scalable, and they work well in clean environments. But they still struggle with messy physics.
Neural-physics hybrids aim to fix that by adding some structure. These models combine learned patterns with known physics, like rigid body dynamics or momentum. That helps with tasks involving contact, like picking up or pushing objects. But they still have a hard time with anything soft, squishy, or unpredictable.
Dynamic digital twins take a more grounded approach. They treat the model as a living system that constantly updates based on real sensor data. This makes predictions more accurate in the moment, especially in stable environments. But it comes at a cost. These systems need a lot of sensing and compute to stay up to date.
Meanwhile, vision models are moving in from a different angle. Generative video systems like Sora, Pika, and Runway don’t understand physics (i.e., no real grasp of friction, mass, or contact), but they can guess what a plausible next frame looks like.
Researchers have started conditioning these models on actions, asking them to predict visual futures. The results often look convincing, but objects still float, vanish, or behave unrealistically. Even so, they’re proving useful for training perception systems, modeling rare edge cases, and generating synthetic data.
But big challenges remain. We don’t yet know how to evaluate contact-aware predictions, plan under uncertainty, or generalize dynamics across different robot bodies.
Part II: Crypto and Robots
We’ve gotten this far without talking about crypto. And we could go much further without needing to.
But if you’re wondering where blockchain might actually matter in robotics, here’s where it starts to get interesting.
There are two key areas, in our view.
#1: Coordinating and accelerating data generation at scale.
We’ve established that robotics needs vast amounts of diverse, high-quality data. Every new task or context demands slow, expensive training.
Crypto offers one possible way to scale: align incentives. Instead of relying solely on labs or companies to collect data, we can reward users for operating robots or contributing useful training trajectories. Concepts like “drive-to-earn” or “clean-to-earn” are early experiments in this space. If these loops prove reliable, they could enable distributed fleets—small robots delivering packages, monitoring crops, or cleaning public spaces—while passively collecting data to improve shared models.
This fits into a broader shift we wrote about in our piece on Data Networks a few weeks ago:
“Selling raw data is a lower-margin business and quickly becomes commoditized. The real leverage comes from building on top of it (apps, models)”
#2: Governance
The idea of “open, safe, governable robots” is a powerful one. We’ve all seen Hollywood movies where robots turn on humanity and try to take over.
As robots take on more responsibility, the question becomes: Who decides what they’re allowed to do?
One answer is on-chain governance. Blockchains can provide a transparent, tamper-resistant source of truth for robotic policies. Imagine a shared registry of behavioral rules (e.g. always maintaining a 1m distance from humans) published on-chain for anyone to audit or propose changes to. This shift in governance from proprietary codebases to open protocols reduces fragmentation and black-box behavior.
Startups like OpenMind are exploring this vision: something like an Android-for-robots stack, but governed collectively. Ethereum proposals such as ERC-7777 represent early moves toward encoding interaction standards directly into contracts.
Security is part of this story too. With cryptographic access control and logging, robots can be programmed to verify instructions, reject unauthorized commands, and execute tasks only if permissions check out. Every action leaves a trace. That kind of transparency makes coordination safer, and failure modes easier to catch.
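A toy sketch of that verify-then-execute pattern is below. To stay self-contained it uses a shared-secret HMAC from Python's standard library and an in-memory rule list; a real deployment would use public-key signatures, with the permission registry and audit log anchored on-chain. All names and values here are hypothetical.

```python
import hashlib
import hmac
import json
import time

SHARED_SECRET = b"operator-provisioning-key"           # placeholder; real systems use public keys
ALLOWED_ACTIONS = {"clean_table", "load_dishwasher"}   # stand-in for an on-chain rule registry

def sign_command(command: dict) -> str:
    payload = json.dumps(command, sort_keys=True).encode()
    return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

def execute_if_authorized(command: dict, signature: str, audit_log: list) -> bool:
    """Verify the signature and the permission before acting; every attempt is logged."""
    payload = json.dumps(command, sort_keys=True).encode()
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    authorized = hmac.compare_digest(expected, signature) and command["action"] in ALLOWED_ACTIONS
    audit_log.append({"time": time.time(), "command": command, "executed": authorized})
    return authorized

log: list = []
cmd = {"action": "clean_table", "robot_id": "unit-42"}
print(execute_if_authorized(cmd, sign_command(cmd), log))   # True: signed and permitted
print(execute_if_authorized({"action": "open_front_door", "robot_id": "unit-42"}, "bad-sig", log))  # False
print(len(log))                                             # 2 entries in the audit trail
```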
Looking further ahead, coordination on the blockchain becomes very interesting.
Robots need to connect and coordinate when:
No single robot has all the information
Multiple robots must work together in real-time
Resources (skills, models, sensors) are shared or monetized
Trust and accountability need to scale
Today, such coordination often relies on centralized services. Blockchains offer a neutral layer for agents to interact, transact, and verify actions without needing a central authority. This could enable dense networks of autonomous systems (drones, delivery bots, warehouse fleets) communicating and adjusting behavior based on shared state.
Each robot would have a verifiable identity. Micro-payments could allow robots to pay for electricity, data, or access to models. Smart contracts would let them trade services and enforce agreements on the fly. If done right, these systems could self-organize at scale.
Emerging Startups to Watch
Several startups are bridging the worlds of robotics and blockchain, putting real products and pilot programs into the field.

1. FrodoBots & BitRobot
If you want a glimpse of what happens when crypto incentives and gamification meet real-world robotics, FrodoBots is a good place to start. The setup is simple: small four-wheeled robots deployed across city sidewalks, controlled remotely by users around the world. In exchange for driving, users earn tokens.
Its flagship product, Earth Rovers, is a real-world scavenger hunt game where players control sidewalk robots via a browser interface. These playful teleoperation sessions produce datasets (e.g., a 2,000-hour open-source urban driving dataset) that feed into AI training. FrodoBots has deployed hundreds of robots across cities worldwide, with upcoming releases like Octo Arms, a robotic arm puzzle game, to expand data coverage.

The same team has also raised $8M in funding to build BitRobot on Solana, a “Bittensor for Robotics”. Each subnet will operate as an open challenge. Contributors submit models or data and are rewarded with tokens to encourage participation and continual improvement.
Read our deep dive on FrodoBots and BitRobot here.
2. Reborn Network
Reborn Network, founded in 2022 by Luffy Yu and a team of researchers from Cornell, UC Berkeley, and Harvard, is building a decentralized platform to train general-purpose humanoid robots using crowdsourced human motion data. Headquartered in Hong Kong, the company’s vision is to democratize robotics through a community-driven AI model economy: “AGI robots of the People, by the People, for the People.”

ReboCap retails at US$200
At the center of the platform is ReboCap™, a low-cost motion capture wearable that lets users record and contribute real-world human movement data. Over 8,000 units have shipped, supporting a user base of 200,000. These contributions feed into Robotic Foundation Models (RFMs).
Supporting tools include Reboverse, a simulation and training environment, and a model zoo where developers can deploy or remix trained robot behaviors. The system is designed to be tokenized. Contributors will eventually earn Reborn tokens for supplying motion data, while robotics companies will use the token to access models or commission training, creating a two-sided incentive loop tied to on-chain participation.
Reborn is already piloting its models in the field through partnerships with firms like Unitree, Swiss-Mile, and Agile Robots, targeting use cases in warehousing, mobility, and healthcare. With China’s humanoid market rapidly expanding, the company is well-positioned for scale.
Still pre-token, Reborn has built meaningful traction through product adoption and a growing developer ecosystem. It’s aiming to become the open data layer for physical AI.
3. PrismaX
PrismaX is a San Francisco–based robotics infrastructure startup founded by AI roboticist Bayley Wang and blockchain developer Chyna Qu. Backed by a16z Crypto, Stanford Blockchain Fund, and Symbolic Capital, the company raised $11 million in seed funding to build a decentralized data and control layer for real-world robotics.
The PrismaX platform operates across three layers:
Data Protocols that validate visual and sensor contributions and reward participants.
Teleoperation Network that coordinates human operators on demand, paid through smart contracts.
AI Foundation Models trained on this incoming data to gradually reduce reliance on human control.
Together, these components form a feedback loop. Teleoperators generate labeled interaction data, which improves model performance, which in turn reduces the need for human input.
PrismaX’s token-based system allows data contributors to retain economic rights in the AI models their data helps train. This model resembles a “Data DAO” for robotics, aligning incentives between users and developers.
The platform also offers plug-and-play teleoperation tools, making it easier for smaller robotics firms to integrate remote control and expand deployments.
Target markets include warehouse automation, autonomous vehicles, drone systems, and robotic services in healthcare and hospitality. PrismaX is particularly attractive to mid-sized robotics companies and certified gig workers, opening up new revenue paths for remote operators.
4. OpenMind
OpenMind, founded in 2024 by Stanford professor Jan Liphardt, is developing a decentralized operating system and trust layer for autonomous robots. Based in San Francisco, the team introduced its platform at the 2025 Coinbase AI Hackathon.
The platform centers on two components:
OM1, an open-source agent framework that integrates modular AI systems—perception, planning, and language models for natural interaction.
FABRIC, a blockchain-based coordination layer that enables secure, auditable collaboration among machines through smart contracts.
Together, these tools allow autonomous systems to form teams, negotiate tasks, and operate under shared, transparent rules.
OpenMind is focused on alignment and accountability. By encoding robot behavior, permissions, and policies on-chain, the system lets developers and users audit what a robot is allowed to do, and why. This is especially relevant in sensitive domains like defense, eldercare, and public services, where traceability matters.
The platform also supports cross-system communication, for example, linking vehicles and service robots through data-sharing networks like DIMO.
OpenMind is starting with open-source tools and plans to monetize through enterprise services, infrastructure for FABRIC, and potentially a native token to support machine-to-machine transactions.
If adopted widely, OpenMind could become a protocol layer for autonomous cooperation, establishing shared behavioral standards across industries and geographies.
5. NRN Agent Robotics
NRN Robotics, built by Toronto-based ArenaX Labs, combines Web3, AI, and robotics into a competitive platform for training and deploying intelligent agents. Founded in 2021 by Brandon Da Silva and Wei Xie, ArenaX first launched AI Arena, an NFT-based fighting game where users train agents through imitation and reinforcement learning. Backed by $11 million from investors including Paradigm and Framework Ventures, the startup has since expanded into physical robotics.
The premise is to make AI training a public, participatory process. Something closer to a sport. Players train agents through gameplay. Those agents become crypto-native assets, and the most successful ones can transfer their skills to the real world.
Using NRN’s SDK, developers collect real-world data, simulate behavior, and port policies into physical systems via sim-to-real workflows. Their RME-1 robotic arm demo showed that policies learned through browser games can be deployed on real hardware.
NRN Robotics is now building out physical competitions (drone races, humanoid battles), each backed by crypto incentives. The result is a decentralized training pipeline that evolves physical agents in the open.
Conclusion
I repeat, the robots are coming.
The real thorny challenges will be social and economic. Will people accept robots in homes, workplaces, and public spaces? And can companies find business models that justify the high upfront costs?
Right now, humanoid robots feel a lot like electric vehicles did in 2013. Expensive, limited in capability, and still niche. The same cycle is likely to play out. Costs will fall. Performance will improve faster than expected. And at some point, a product will arrive that captures public imagination: a “ChatGPT moment” that triggers a rapid shift in adoption.
Robotics and crypto might seem like strange partners, but their intersection is starting to feel less speculative and more inevitable. Robots need better data, new funding models, and trustless coordination. Crypto offers tools for all three: global incentives, shared ownership, and programmable governance. The first experiments are already underway.
Crypto won’t “fix” robotics. But it might just accelerate the future we’re building.
Cheers,
Teng Yan
A few good reads that were helpful in our research
Jim Fan and the Physical Turing Test (Sequoia AI Ascent)
This essay is intended solely for educational purposes and does not constitute financial advice. It is not an endorsement to buy or sell assets or make financial decisions. Always conduct your own research and exercise caution when making investments.