Opinion: Decentralised AI will save humanity

The stakes are high and our window of opportunity is small

Marc Andreessen says AI will save the world.

Let me add to this: Decentralised AI will save humanity.

It may not be apparent now, but we will appreciate this soon.

AI is a centralising technology

Eric Wall did his research and calls OpenAI an “untrustworthy organization.”

Let’s consider why we are in this state today.

The race to develop advanced intelligence is an arms race for compute power.

Access to top-tier GPUs is the biggest bottleneck for any single organisation developing advanced AI. As discussed in last week’s newsletter, building large foundational AI models isn’t as simple as renting a bunch of GPUs.

Often, you must construct your own data centres, which involve high-speed networking, customized data storage, stringent privacy considerations, and efficiency optimization. Cloud GPU rental solutions just can’t match these capabilities.

It’s no surprise, then, that tech giants like Microsoft, Google, and OpenAI are leading the charge. Smaller players lack the resources to compete at this level.

More worryingly, though, we’re entering a new paradigm where the performance gap between proprietary and open-source AI models is quickly widening.

In the pre-generative AI era, AI researchers frequently published their findings as academic papers, contributing to a collective pool of knowledge. There was little reason to keep discoveries under wraps.

Case in point: “Attention Is All You Need”—the landmark Transformer architecture paper that eventually led to ChatGPT—was published openly in 2017 by Google scientists.

Today, frontier AI research is conducted behind closed doors at the top AI labs, and important breakthroughs are kept secret. There are substantial commercial interests at stake, especially with investors who need to see a return on their invested capital.

The incentives have flipped.

A Dark Future

Generated by DALL-E

Imagine a world where AI is controlled by Big Tech. In this Orwellian dystopia:

  • AI will always remain a black box. This lack of transparency is alarming, especially because we will use these systems to make decisions that heavily impact our lives. Trustless verifiability is crucial in high-stakes fields like healthcare1.

  • Our minds get manipulated. The entities that own AI will be tempted to use it to serve their own agendas. The potential for AI misuse in shaping public opinion, manipulating markets, or swaying political outcomes is very real2.

  • Censorship becomes the norm. Social media platforms like X and TikTok already function more like editorialized feeds, where (political) views can be amplified or suppressed. But the open internet means anyone can still spin up a website and write whatever they want. That changes when everything we use in the future is an AI app whose output can be easily filtered.

  • We no longer own our data. Instead, we resign ourselves to the reality that our data is routinely harvested to feed large, centralized AI models without consent or fair compensation. Governments and those in power will go to great lengths to maintain their dominance, including invading our privacy.

Living in a world where our data and personal AI are not under our control is deeply unsettling.

If left unchecked, our society risks becoming overly dependent on a few powerful, monopolistic AI systems. We become mentally enslaved.

What’s The Alternative?

We need a counterbalance to the centralizing force of AI. We have a small window to shape the post-AI world we aspire to — one that is democratic, open, and fair.

Enter Crypto.

With crypto, we have a shot at upholding these key tenets:

  1. Decentralized Control: Decision-making and control are distributed across a network, governed by code, removing power from any single entity.

  2. User Empowerment: Users maintain ownership over their assets and data.

  3. Censorship-Resistance: No single party has the power to censor content at will.

Many argue that Crypto x AI startups are vaporware or scams, lack real use cases, and only introduce additional friction.

Some of this criticism holds water.

But let me ask you this: What is the alternative? The stakes, my friend, are sky-high.

It’s about embracing a vision of freedom, privacy, and human potential.

If we don’t seize this opportunity and support those who are legitimately building towards decentralised AI while it is still early, humanity’s future could look very bleak.

I’m doing my part.

The idea maze for AI startups

I stumbled upon an interesting tidbit from Chris Dixon’s blog, written way back in 2015, yet still incredibly relevant today. Dixon was prescient.

Essentially:

It’s easy to get an AI to ~80% accuracy; beyond that point, returns diminish rapidly.

So, AI founders should do one of two things:

  1. Build a product that only needs ~80% accuracy, or

  2. Get close to 100% accuracy by narrowing the scope as much as possible and obtaining as much data as possible.

For (2), you can obtain data by crowdsourcing, mining public sources, or collecting it yourself directly.

Hope you enjoyed this midweek piece.

Cheers,

Teng Yan

Footnotes:

1 One sad example is Babylon Health, which heavily promoted its personal AI doctor. However, it was later revealed that their "AI doctor" was merely a set of rule-based algorithms operating on a spreadsheet and failed to perform as advertised. Billions of investment dollars were wiped out, and people were harmed.

2 Google’s Gemini faced severe backlash when it generated historically inaccurate images depicting figures in racially altered contexts (e.g., a Black ‘founding father’ and a Black pope).
