Opinion: Decentralised AI will save humanity
The stakes are high & our window of opportunity is small
Marc Andreessen says AI will save the world.
Let me add to this: Decentralised AI will save humanity.
It may not be apparent now, but we will appreciate this soon.
AI is a centralising technology
Eric Wall did his research and calls OpenAI an “untrustworthy organization”:
i'm still mulling over the fact that the @OpenAI board fired @sama because he couldn't be trusted and the entire superalignment team quit over safety concerns
for months i've been wondering what actually happened and there's quite a lot of content out there for anyone who's… x.com/i/web/status/1…
— Eric Wall | BIP-420😺 (@ercwl)
11:28 PM • Jul 9, 2024
Let’s consider why we are in this state today.
The race to develop advanced intelligence is an arms race for compute power.
Access to top-tier GPUs is the biggest bottleneck for any single organisation developing advanced AI. As discussed in last week’s newsletter, building large foundational AI models isn’t as simple as renting a bunch of GPUs.
Often, you must construct your own data centres, which require high-speed networking, customized data storage, stringent privacy safeguards, and relentless efficiency optimization. Cloud GPU rental solutions simply can’t match these capabilities.
It’s no surprise, then, that tech giants like Microsoft, Google, and OpenAI are leading the charge. Smaller players lack the resources to compete at this level.
More worryingly, though, we’re entering a new paradigm where the performance gap between proprietary and open-source AI models is quickly widening.
In the pre-generative AI era, AI researchers frequently published their findings as academic papers, contributing to a collective pool of knowledge. There was little reason to keep discoveries under wraps.
Case in point: “Attention Is All You Need”, the landmark Transformer architecture paper that eventually led to ChatGPT, was published openly in 2017 by Google scientists.
Today, frontier AI research is conducted behind closed doors at the top AI labs, and important breakthroughs are kept secret. Substantial commercial interests are at stake, not least because these labs have investors who need to see a return on their capital.
The incentives have flipped.
A Dark Future
Generated by DALL-E
Imagine a world where AI is controlled by Big Tech. In this Orwellian dystopia:
AI will always remain a black box. This lack of transparency is alarming, especially because we will use these systems to make decisions that heavily impact our lives. Trustless verifiability is crucial in high-stakes fields like healthcare.¹
Our minds get manipulated. The entities that own AI will be tempted to use it to serve their own agendas. The potential for AI misuse in shaping public opinion, manipulating markets, or swaying political outcomes is very real.²
Censorship becomes the norm. Social media platforms like X and TikTok already function more like editorialized content platforms, where (political) views can be emphasized or suppressed. But the open internet still means anyone can spin up a website and write whatever they want. That changes when everything we use in the future is an AI app whose output can be quietly filtered.
We no longer own our data. Instead, we resign ourselves to the reality that our data is routinely harvested to feed large, centralized AI models without consent or fair compensation. Governments and those in power will go to great lengths to maintain their dominance, including invading our privacy.
Living in a world where our data and personal AI are not under our control is deeply unsettling.
If left unchecked, our society risks becoming overly dependent on a few powerful, monopolistic AI systems. We become mentally enslaved.
What’s The Alternative?
We need a counterbalance to the centralizing force of AI. We have a small window to shape the post-AI world we aspire to — one that is democratic, open, and fair.
Enter Crypto.
With crypto, we have a shot at upholding these key tenets:
Decentralized Control: Decision-making and control are distributed across a network, governed by code, removing power from any single entity.
User Empowerment: Users maintain ownership over their assets and data.
Censorship Resistance: No single party has the power to censor content at will.
Many argue that Crypto x AI startups are vaporware or scams, lack real use cases, and only introduce additional friction.
Some of this criticism holds water.
But let me ask you this: What is the alternative? The stakes, my friend, are sky-high.
It’s about embracing a vision of freedom, privacy, and human potential.
If we don’t seize this opportunity and support those who are legitimately building towards decentralised AI while it is still early, humanity’s future could look very bleak.
I’m doing my part.
The idea maze for AI startups
I stumbled upon an interesting tidbit from Chris Dixon’s blog, written way back in 2015, yet still incredibly relevant today. Dixon was prescient.
Essentially:
It’s easy to get an AI to ~80% accuracy; beyond that, returns diminish quickly.
So, AI founders should either:
Build a product that only needs 80% accuracy, or
Achieve 100% accuracy by narrowing the scope as much as possible and obtaining as much data as possible.
For (2), you can obtain data by crowdsourcing, mining public sources or collecting it yourself directly.
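To make the crowdsourcing route in (2) concrete, here is a minimal sketch in Python. It is not from Dixon’s post; the example IDs, labels, and the 75% agreement threshold are all hypothetical. The idea: collect several noisy labels per example and keep only the examples where annotators clearly agree, so the narrow-scope model trains on high-quality data.

from collections import Counter

# Hypothetical crowdsourced annotations: each example labelled by several workers.
annotations = {
    "img_001": ["cat", "cat", "cat", "dog"],
    "img_002": ["dog", "cat", "dog", "dog"],
    "img_003": ["cat", "dog", "bird", "dog"],  # too noisy to trust
}

def majority_label(labels, min_agreement=0.75):
    """Return the most common label if enough annotators agree, else None."""
    label, count = Counter(labels).most_common(1)[0]
    if count / len(labels) >= min_agreement:
        return label
    return None

# Keep only high-agreement examples for training the narrow, high-accuracy model.
dataset = {}
for example_id, labels in annotations.items():
    label = majority_label(labels)
    if label is not None:
        dataset[example_id] = label

print(dataset)  # {'img_001': 'cat', 'img_002': 'dog'}

Filtering by agreement trades dataset size for label quality, which suits the narrow-scope, near-100%-accuracy strategy better than keeping every noisy label.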
Hope you enjoyed this midweek piece.
Cheers,
Teng Yan
Footnotes:
¹ One sad example is Babylon Health, which heavily promoted its personal AI doctor. It was later revealed that its "AI doctor" was merely a set of rule-based algorithms operating on a spreadsheet and failed to perform as advertised. Billions of investment dollars were wiped out, and people were harmed.
² Google’s Gemini faced severe backlash when it generated historically inaccurate images depicting figures in racially altered contexts (a black ‘founding father’ and a black pope).