A cyberpunk Shiba Inu bartender, neon visor, mixing drinks

That’s the entire prompt. Ten seconds later, it appears in 3D in your Unity scene, complete with lighting, texture, and depth.

I’m not a designer, but I can imagine that this feels almost like sorcery. This is 404–Gen, a decentralized network turning plain language into 3D assets.

“A cyberpunk Shiba Inu bartender, neon visor, mixing drinks” by 404-gen

Every digital world runs into the same problem: 3D content is slow to make. Every hero needs a sword, and every level needs a thousand props. In big studios, the inside joke is that some designer is just shipping barrels & crates all year round (probably true).

Studios like Ubisoft and Rockstar throw armies of designers at modeling barrels and crates. Hobbyists barely stand a chance. If you cannot model, you cannot build.

404–Gen runs as Subnet 17 on Bittensor, a decentralized AI protocol. It is not one model but a swarm, competing in real time. Contributors plug in different approaches, validators score results, and tokens flow to the winners. Weak models vanish, strong ones rise. The network has already produced more than 20 million outputs.

The mission is clear: put 3D creation in the hands of anyone with an idea and a keyboard. Not just game developers or VFX teams, but solo builders, hobbyists, and tinkerers.

Basically…everyone.

From Atlas to 404–GEN

The idea traces back to its founder, Ben James, and a venture called Atlas.

Ben James trained as an architect, but he always thought more like a systems engineer. His early work focused on shaping virtual spaces: 3D cities, complex interiors, immersive landscapes. The creative tools he had were powerful, but his bottleneck was scale. Even the best artists could only produce assets so quickly.

In 2023, Ben launched Atlas, a Web 2 platform designed to help studios like EA, Ubisoft, and Rockstar accelerate asset creation. The approach was straightforward. Use AI to generate the raw material. AI could create 20,000 unique buildings in the time it previously took to craft 100. Studios used it to render entire cities. It worked.

From LinkedIn

But inside those pipelines, Ben noticed something strange. The research was accelerating, but progress felt fragmented. Dozens of experimental models were floating around with “no system to reward progress or organize the frontier.” Innovation was happening in silos. Atlas served its purpose for large teams, but Ben began to see a bigger opportunity.

That shift came into focus when he discovered Bittensor, a protocol for decentralized AI. It lets anyone create a specialized subnet, a dedicated network for training or serving models. Ben’s idea was to reward whoever builds the best model for text-to-3D generation. By making 404–Gen a Bittensor subnet, they could tap into decentralized computing power and a global pool of AI talent – all competing (and collaborating) to improve text-to-3D generation.

In 2024, Ben and his team launched 404–Gen as Subnet 17. They took the core technology behind Atlas and opened it up to the world. Anyone can contribute a model. Anyone can validate outputs. The network measures performance and distributes token rewards based on results. The protocol does not care about reputation or brand. It only cares about output quality.

Underwater scene generated entirely by 404-Gen

Ben now leads a focused team of 10+ researchers and engineers with deep roots in 3D graphics and AI. Atlas continues in parallel, serving enterprise users and backed by $6 million in venture funding (underscoring how much investors believe in AI-driven 3D creation).

But 404–Gen is Ben’s longer play. It reimagines a 3D content company as a decentralized protocol built for speed, competition, and scale.

How 404-Gen Works: Text Prompts → 3D “Splats”

So… what actually happens when you type a prompt into 404–Gen?

Say you enter: an old oak tree with mossy roots. Behind the scenes, a swarm of AI models races to bring that tree to life. These models are trained on massive datasets of 3D geometry, textures, and scenes. Their job is to synthesize a shape that matches your description. And do it fast.

404–Gen doesn’t rely on a single model. It runs as a model-agnostic marketplace, where multiple approaches compete on every prompt. One participant might use a neural radiance field. Another might rely on diffusion-generated point clouds. Someone else might pipe the text through a mesh generator.

This multi-model approach keeps the network at the cutting edge. If a new academic breakthrough comes out, someone in the 404–Gen community can integrate it and compete with it.

To understand why this matters, it helps to break down how 3D content is represented. There are three main approaches.

Meshes are the traditional backbone of 3D graphics. Objects are built from polygonal surfaces (usually triangles), manually shaped by artists or generated with procedural tools. Meshes are precise and game-ready because engines like Unity and Unreal expect them. But they’re hard to scale. Each new shape often means starting from scratch.

NeRFs (Neural Radiance Fields) emerged from academic labs around 2020. Instead of polygons, NeRFs use neural networks to learn how a scene looks from different viewpoints. They produce gorgeous, photorealistic renderings of static scenes (like turning a bunch of photos into a 3D flythrough).

However, they’re computationally heavy and hard to make interactive. A NeRF doesn’t produce a usable mesh that you can easily drop into a game engine.

Then came 3D Gaussian Splatting.

This technique emerged in 2023 and quickly changed the game. Inside 404–Gen, it became the default method for fast, high-quality 3D generation.

Instead of building objects from triangles or voxel grids, splatting represents a scene as a dense cloud of tiny, translucent ellipsoids. Each “splat” carries position, color, size, and opacity. Render them together, and they blend into a smooth 3D image. Think pointillism, but applied to space.
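To make that concrete, here is a minimal sketch of what a single splat might carry and how overlapping splats blend into one pixel. The field names and the front-to-back compositing rule follow the standard 3D Gaussian Splatting formulation in simplified form; this is illustrative, not 404-Gen's internal format.

```python
# A minimal sketch of the data a single Gaussian splat carries and how
# overlapping splats blend into a pixel. Field names and the compositing
# rule are simplified and illustrative, not 404-Gen's actual format.
from dataclasses import dataclass

import numpy as np


@dataclass
class Splat:
    position: np.ndarray   # (3,) center of the ellipsoid in world space
    scale: np.ndarray      # (3,) radii along each axis
    color: np.ndarray      # (3,) RGB in [0, 1]
    opacity: float         # alpha in [0, 1]


def composite(splats: list[Splat], view_dir: np.ndarray) -> np.ndarray:
    """Front-to-back alpha compositing of the splats covering one pixel."""
    # Sort by depth along the viewing direction (nearest first).
    ordered = sorted(splats, key=lambda s: s.position @ view_dir)
    color = np.zeros(3)
    transmittance = 1.0  # how much light still passes through
    for s in ordered:
        color += transmittance * s.opacity * s.color
        transmittance *= (1.0 - s.opacity)
        if transmittance < 1e-4:  # early exit once the pixel is effectively opaque
            break
    return color
```

Render millions of these per frame and the "cloud of ellipsoids" resolves into the smooth, lit object you see in the viewport.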

Unlike NeRFs, splats can run much closer to real-time. And unlike meshes, they don’t require tedious manual construction.

From a user’s perspective, the benefits are immediate. Type a prompt, wait under a minute, and see a textured 3D object with realistic lighting and form. It may not be ready for final export, but it’s good enough to explore ideas, tweak direction, and keep moving.

This is the core advantage: splats offer visual quality close to NeRFs with performance that feels native to game engines. They’re fast and expressive.

Quarterly trends of research papers on key 3D modeling methods from 2019–2025

Game engines like Unity and Unreal still treat splats as outsiders. They need plugins or conversion tools to work smoothly. The 404–Gen team is building a “splat to mesh” converter to bridge the gap. Until then, splats are best used for prototyping.

Even when the asset isn’t final, the process is a win. Concept artists, game devs, and worldbuilders get a feedback loop that’s tighter, cheaper, and way more fun to use.

The Tournament Engine

404–Gen doesn’t just generate 3D models. It runs a tournament.

Every prompt (“medieval wooden barrel,” “sci-fi blaster,” “bonsai tree”) launches a contest among miners. Each miner is a node running a text-to-3D model. Performance is tracked through a live Glicko2 ranking system, similar to the chess Elo rating system.

404-gen Miner’s Dashboard

Strong miners rise up the leaderboard and win more rewards (since rewards are a function of Glicko2 score × submitted results). Weak miners drift down and see fewer. Each request is a match that carries status, tokens, and proof that a model belongs in the network.
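As a toy illustration of that weighting, a miner's share of an epoch's reward pool might be computed like this. The function name, miner IDs, and numbers are made up for illustration; the subnet's actual emission logic is more involved.

```python
# Toy illustration of the weighting described above: a miner's share of the
# pool scales with (Glicko2 rating x accepted submissions). A sketch of the
# idea, not Subnet 17's actual emission code.
def reward_shares(miners: dict[str, tuple[float, int]], pool: float) -> dict[str, float]:
    """miners maps miner_id -> (glicko2_rating, submitted_results)."""
    weights = {m: rating * results for m, (rating, results) in miners.items()}
    total = sum(weights.values()) or 1.0
    return {m: pool * w / total for m, w in weights.items()}


# Example: a strong, active miner takes most of a 100-token epoch.
print(reward_shares({"miner_a": (1900.0, 120), "miner_b": (1500.0, 80)}, pool=100.0))
```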

When a miner makes a submission, it generates the asset—often a .ply file of Gaussian splats, each carrying position, color, size, and opacity. Plugins for Unity and Blender render the result instantly. Within about a minute, the barrel appears on-screen, textured and lit.
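If you want to peek inside one of those .ply files before they land in Unity or Blender, a few lines of Python will do it. This assumes the common 3D Gaussian Splatting layout (x/y/z positions, logit opacities, log scales) and a hypothetical file name; 404-Gen's exact schema may differ.

```python
# Quick inspection of a Gaussian-splat .ply. Assumes the common 3DGS layout;
# the file name is hypothetical and the field names are an assumption.
import numpy as np
from plyfile import PlyData  # pip install plyfile

ply = PlyData.read("barrel.ply")
verts = ply["vertex"]

positions = np.stack([verts["x"], verts["y"], verts["z"]], axis=-1)
# 3DGS conventionally stores opacity as a logit and scale as a log; undo both.
opacities = 1.0 / (1.0 + np.exp(-np.asarray(verts["opacity"])))
scales = np.exp(np.stack([verts["scale_0"], verts["scale_1"], verts["scale_2"]], axis=-1))

print(f"{len(positions)} splats, bounding box {positions.min(0)} to {positions.max(0)}")
```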

The user can then modify the result if needed. 404–Gen supports “cutouts” – for example, you can mask out parts of the generated model you don’t like by drawing a box or ellipsoid around them. This gives some creative control to remove unwanted bits without re-generating from scratch. Users can also retry with an adjusted prompt or style prompt to fine-tune the output’s appearance (e.g. “make it a low-poly style barrel” etc.).
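Conceptually, a cutout is just a filter: drop every splat whose center falls inside the box or ellipsoid the user drew. The plugin does this in-editor; the sketch below is a naive version with illustrative bounds and array layout.

```python
# Naive sketch of the cutout idea: remove splats inside a user-drawn region.
import numpy as np


def cutout_box(positions: np.ndarray, attrs: np.ndarray,
               box_min: np.ndarray, box_max: np.ndarray):
    """positions: (N, 3) splat centers; attrs: (N, k) per-splat attributes."""
    inside = np.all((positions >= box_min) & (positions <= box_max), axis=1)
    return positions[~inside], attrs[~inside]


def cutout_ellipsoid(positions: np.ndarray, attrs: np.ndarray,
                     center: np.ndarray, radii: np.ndarray):
    """Drop splats inside an axis-aligned ellipsoid."""
    inside = np.sum(((positions - center) / radii) ** 2, axis=1) <= 1.0
    return positions[~inside], attrs[~inside]
```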

This structure pushes constant experimentation. If DeepMind drops a new paper, someone can implement it, deploy it on Subnet 17, and immediately see how it fares in the open arena. No one cares about brand or reputation. Only outputs that win direct comparisons earn rewards.

Each output doubles as a benchmark. Validators check results, update rankings, and distribute rewards in subnet tokens with fixed supply. This cycle runs thousands of times each day, and the dataset grows with every prompt.

What makes the engine flexible is that it accepts any method. Some miners generate a 3D point cloud through diffusion, then convert it into splats. Others train a NeRF on the fly for each prompt, using a text-to-image model as a teacher. Many now rely on Gaussian splatting for speed and fidelity. The system does not enforce a single technique. It rewards whichever one delivers the best result.
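As a toy example of that first route, the simplest point-cloud-to-splat conversion just gives every diffusion-generated point a small isotropic Gaussian and a default opacity. Real miner pipelines fit scales, rotations, and per-point opacities; everything in this sketch is illustrative.

```python
# Rough sketch of the simplest point-cloud-to-splats conversion a miner might
# start from: one small isotropic Gaussian per point. Real pipelines optimize
# these parameters per point; the defaults here are illustrative.
import numpy as np


def points_to_splats(points: np.ndarray, colors: np.ndarray,
                     radius: float = 0.01, opacity: float = 0.8) -> dict[str, np.ndarray]:
    """points: (N, 3) diffusion-generated point cloud; colors: (N, 3) RGB."""
    n = len(points)
    return {
        "position": points,
        "scale": np.full((n, 3), radius),                     # isotropic ellipsoids
        "rotation": np.tile([1.0, 0.0, 0.0, 0.0], (n, 1)),    # identity quaternions
        "color": colors,
        "opacity": np.full((n, 1), opacity),
    }
```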

By combining these steps, 404–Gen turns text prompts into 3D assets through a self-improving, decentralized assembly line. Over time, the assets generated by the network should get better and more varied, because the network learns which models work best for which kinds of prompts.

In other words, 404–Gen is a “mining” economy for 3D AI.

Workflow diagram for distributed 3D asset generation. Source: 404–Gen Whitepaper

What 404–Gen Has Achieved So Far

A little over a year in, 404–Gen has already delivered concrete products. It shipped a Unity plugin, verified on the Unity Asset Store and the first blockchain-based 3D generator available there. It also released a Blender add-on, a web app, and a Discord bot so creators can use the system inside the tools they already prefer.

404-Gen claims a dataset of 21.5 million+ AI-generated 3D models, ~40 TB of content, all with prompts, metadata, and contributor attribution. They released a mini-subset (20,000 assets) publicly on Hugging Face.

In my opinion, 404-Gen’s dataset is the backbone of its competitive edge. It fuels model improvement, powers UX speed, and gives it a lead that rivals will need months or years to match. In text-to-3D today, access to an open and diverse dataset is rare. For 404–Gen, it provides real weight in three arenas: training new architectures, supporting research, and delivering creator tools that scale.

Independent developers have used 404-Gen to build complete game scenes, often without prior 3D experience. The thesis is proving out: 404–Gen lowers the barrier so anyone can participate in 3D creation.

The team has also introduced 2D-to-3D generation. Users can input sketches or reference images and receive AI-generated 3D models in return. The feature is currently live at the code level for miners and will be integrated into the creator workflow in a future release.

Does 404-Gen Have A Chance? Market Position and Competitors

For years, text-to-3D has been treated as a holy grail for digital content. Whoever builds a tool that is fast, usable, and integrated could power the asset supply chain for everything from games to AR/VR to virtual worlds.

404–Gen is among the boldest contenders. It is running in a field crowded with labs, startups, and giants trying their own angles.

Big Tech & AI Labs have been experimenting for years. Google’s DreamFusion showed you can generate 3D models from text using NeRF-style optimization, but each scene often takes hours. NVIDIA’s Magic3D improved on that, producing higher-resolution meshes more quickly, but still not at real-time speed. GET3D produces explicit textured meshes ready for downstream use. Meta, Adobe, and others are building datasets, licensing models, and shaping tools. Many of those efforts remain in research or limited beta form. To date, few are tools that creators can drop into their Unity or Blender workflow and use in real time.

Game engine makers could shift the game entirely. Unity has moved toward AI-assisted tools (Muse among them) and has acquired VFX-level tech. Unreal has huge asset libraries and strong graphics pipelines. If Unity or Epic integrates text-to-3D generation deeply, they could reshape how assets are sourced. But integration, performance, and open licensing are still hurdles. In that gap, 404-Gen currently holds a lead.

Among startups, the focus has been on narrow slices of the problem. Luma AI makes it easy to capture real objects with a phone and convert them into NeRF-based 3D scans. Kaedim turns 2D images into 3D models, with human touch-ups layered in for polish. Blockade Labs specializes in generating panoramic skyboxes from text prompts. Lovelace Studios (Anything World) has played with voice- or text-driven creature generation.

These players are interesting, but they’re mostly narrow. None yet offers the broad, end-to-end platform that 404–Gen has already put in front of users, with the possible exception of Luma, which is moving fast.

404–Gen’s edge is simple. It is live, usable today, with a community generating millions of assets. It aims to offer fast turnaround, multiple model types competing, open licensing, and integrations where creators already are.

In a space heavy on research papers and limited demos, 404–Gen is shipping tools people can use.

The Obstacles Ahead

Scaling 404–Gen is not guaranteed. The biggest hurdle is quality. AI-generated assets swing between impressive and broken. A prompt might deliver a perfect medieval barrel, or it might spit out warped geometry and floating shards. Early image generators had the same growing pains with faces. Three-dimensional content multiplies the difficulty.

Validation filters help, but they cannot guarantee consistency across styles. A realistic tank and a cartoon dragon demand different model families. Reliability across that spectrum is what users will eventually expect.

The second challenge is workflow integration. Splats are ideal for rapid prototyping, but production pipelines need clean meshes with textures, LODs, and collision models. Today, 404–Gen produces dense point clouds—fast to view, but not game-ready. A splat-to-mesh converter is in development, and until it matures, manual cleanup remains a bottleneck. That gap leaves room for competitors with more polished pipelines to step in.
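In the meantime, a rough workaround exists: treat the splat centers as a colored point cloud and run Poisson surface reconstruction, for example with Open3D. This is a naive sketch, not the team's converter, and the result will not match a hand-authored mesh.

```python
# Naive splat-to-mesh bridge: treat splat centers as a colored point cloud
# and run Poisson surface reconstruction. A rough sketch, not 404-Gen's
# official converter; expect to clean up the result by hand.
import numpy as np
import open3d as o3d  # pip install open3d


def splats_to_mesh(positions: np.ndarray, colors: np.ndarray, out_path: str = "asset.obj"):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(positions)
    pcd.colors = o3d.utility.Vector3dVector(colors)
    pcd.estimate_normals()  # Poisson reconstruction needs oriented normals

    mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=20_000)
    o3d.io.write_triangle_mesh(out_path, mesh)
    return mesh
```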

Then there is competition itself. If Unity or Epic rolls out native text-to-3D, many developers will default to it out of convenience. Big Tech has the user base, distribution, and capital to flood the field once it decides the timing is right. 404–Gen’s best defense is speed, openness, and the collective weight of its community. These are advantages that centralized rivals cannot easily replicate.

Finally, generative AI still lives in a grey zone. 404–Gen has taken the cautious route: open-source models, licensed data, its own dataset of known origin, and metadata on every output. That provides provenance and defensibility. Yet no system is perfect. If copyrighted material slips into training, outputs could echo protected works. Staying vigilant as standards evolve will be essential.

The Token & The Bet

SN17 token price (Taostats.io)

404-Gen already has a live, tradable token, part of Bittensor’s dTAO economy. At the time of writing, the token is trading at $4.36 with a market cap of $11 million and a fully diluted valuation of $91 million. It is ranked #18 among Bittensor subnets by market cap.

I’m the first to admit that the current price chart doesn’t inspire great confidence. Momentum is weak, and that is typical of a token with continual inflation and limited demand. It reads as apathy.

Yet this is often where I believe the most interesting opportunities hide: protocols that keep shipping while investors look elsewhere. The catch is time frame. Anyone buying has to treat it as a venture-style bet, not a quick trade. Its backing by Unsupervised Capital adds some credibility.

That valuation rests on real assets: a vast open 3D dataset, verified plugins for Unity and Blender, and creators already building scenes. The team is shipping usable tools at a steady clip. What is missing is a clear revenue model.

404–Gen hasn’t turned on monetization yet. The play I believe they will make is this: give away free generations to build a community, lock into creator workflows, and only introduce paid tiers once network effects make switching costly.

Midjourney’s Insane Revenue Growth. Source: GetLatka

Midjourney proved that this business model works, going from zero to $50M in revenue in year one of charging, with estimates of a whopping $500M by end-2025.

(Meanwhile, I’ve just been using Midjourney to generate anime girls…)

404–Gen has the chance to go even bigger. 3D assets carry higher compute costs, more professional utility, and more willingness to pay than flat images. Layer on revenue sharing with miners and you get a system that compounds adoption while spinning off cash.

For valuation comparison (not apples-to-apples, more apples-to-pears): Luma AI, riding on the narrative of video generation and world models, is chasing a $3.2 billion round on roughly $8 million in revenue. Kaedim converts 2D art into 3D models and is reportedly generating $9.5 million in annual revenue. Both are Web2 startups.

| Company | Valuation / FDV | Revenue (est.) | Focus |
| --- | --- | --- | --- |
| 404–Gen (Bittensor) | $11M market cap / $91M FDV | N/A | Decentralized text-to-3D, Unity/Blender plugins |
| Luma AI | ~$3.2B (in fundraising talks) | ~$8M ARR | Generative video, multimodal “world models” |
| Kaedim | ~$50M | ~$9.5M ARR | 2D-to-3D conversion with human QC |

Luma shows how venture markets are pricing “AI for media creation” bets, a signal of how high investors will bid up a strong narrative in generative content. Kaedim demonstrates how it is possible to build a substantial revenue stream quite quickly in this space. 404–Gen could be the “undervalued open play” here.

The trajectory matters. If adoption grows, if mesh export closes the workflow gap, if studios start treating 404–Gen as the default text-to-3D pipeline, today’s $91M FDV (and $11M market cap) will look extremely conservative within a year. The market appears to be pricing in partial success, but not the full upside.

The leap from prototype to production is non-trivial, however. Assets must move from splats to meshes, pipelines need polish, and engine workflows demand reliability. Competition looms too. If Unity, Epic, or a major lab releases a polished text-to-3D tool, the landscape could shift quickly.

In short, it is exactly what you’d expect from an early protocol: high risk, high reward.

The Road Toward AI-Native Virtual Worlds

In the near term, 404–Gen’s job is refinement. The core proof (text to 3D on demand) is already there. Now the focus shifts to polish. A cleaner web app, searchable access to the 21-million-asset library, more plugins for Unity and Blender, and mesh exports that make outputs game-ready. Expect fast iteration here, along with decentralized storage so assets can be fetched peer-to-peer instead of relying on central servers.

The medium horizon looks different. Instead of generating single objects, 404–Gen wants to ship entire kits.

Imagine a complete medieval RPG pack: castles, weapons, dragons, and terrain, ready to slot into a prototype. Or a neon-lit cyberpunk set for a new city build. Themed bundles cut friction for developers, and community game jams built entirely on 404–Gen assets could showcase what the platform makes possible.

The long vision is more radical: AI-native worlds generated in real time. You describe a survival game on a tropical island with ancient ruins. AI spins up terrain, objects from 404–Gen, characters, dialogue, audio, even music, drawing from other Bittensor subnets as needed. It sounds ambitious, but early prototypes in AI gaming suggest it is not out of reach. In this frame, 404–Gen becomes less a tool and more the 3D backbone of a larger AI operating system for virtual worlds.

To win, 404–Gen must lean on what sets it apart: openness, decentralization, and speed. The next few years will decide if it evolves from impressive demo to indispensable infrastructure.

Key Metrics to Watch:

  • User growth & engagement – Are more creators and miners joining, and is asset generation growing month by month?

  • Output quality – Are models improving enough for real-world adoption?

  • Partnerships – Does 404–Gen secure deeper ties with studios and toolmakers?

  • Token health – Does SN17 sustain value and incentivize miners long-term?

Conclusion

404–Gen is easy to dismiss. Many still hold a poor opinion of Bittensor and its subnets, believing that they are mostly fluff.

But 404 has already built what others only promise: a live decentralized system that turns language into 3D objects at scale. The dataset is enormous, the tools already work, and the incentive loop keeps driving the models forward.

The risks are clear. Output quality, workflow integration, and competition from giants could stall its momentum. The reward is harder to overlook. If it succeeds, worldbuilding shifts from hours of modeling to minutes of conversation.

That is the wager: anyone with an idea and a keyboard (yes, that’s you and me) can build entire 3D worlds.

Cheers,

Teng Yan & Avu

This is an independent report from Chain of Thought. We did not receive any payment or sponsorship from 404-Gen for writing this.

This essay is intended solely for educational purposes and does not constitute financial advice. It is not an endorsement to buy or sell assets or make financial decisions. Always conduct your own research and exercise caution when making investments.
