đź‘‘ Weekly AI Edge #11
Akash CPUs in heavy demand. Space and Time raises $20M
GM! Welcome to Chain of Thought, the best research on Crypto AI.
In this edition of our weekly AI Edge, we cover:
Space and Time closes a $20M Series A funding round
Akash CPUs are in high demand
USDC, the stablecoin of choice for AI?
How we get to GPT-8
Now, you can access Chain of Thought directly on your iPhone’s home screen!
1. Open Safari and go to chainofthought.xyz
2. Tap the Share button (the square with an arrow pointing up) at the bottom of the screen.
3. Scroll down and tap Add to Home Screen.
We’re just a tap away, like your favourite app!
🦍 State of the Market..
Source: CoinGecko
The overall Crypto x AI market cap remained relatively flat, rising a meagre 1.3% to $22.7B this week.
All eyes were on NVIDIA’s Q2 earnings release. Despite beating estimates (EPS of 68 cents vs. 65 cents expected), the stock took a nearly 10% hit, reflecting the market’s sky-high expectations for the chipmaker.
In typical crypto fashion, NVDA’s post-earnings drop sparked a selloff in several crypto AI tokens. TAO, the AI sector’s bellwether, fell 15% this week.
Meanwhile, a few small-cap AI tokens saw a nice pump. NOS, a GPU marketplace on Solana, surged over 50% without any clear catalyst or news—maybe some insider action?
| Token | Price (7-day change) | FDV |
|---|---|---|
| Bittensor (TAO) | $278.87 (-15.8%) | $5.85B |
| NEAR Protocol (NEAR) | $4.27 (-5.2%) | $4.73B |
| Akash (AKT) | $2.70 (+6.4%) | $670M |
| Golem (GLM) | $0.28 (-11.0%) | $280M |
| Nosana (NOS) | $1.99 (+54.0%) | $190M |
đź“Š Chart of the Week
Akash CPU Usage Returns to Highs
For the past six weeks, CPUs on Akash have been in high demand, with the number of CPUs leased in August nearing 8,000, almost back to the yearly highs. Suppliers are earning between ~$0.80 and $1.60 per CPU thread per month.
Some of the standout builders on Akash include Venice.ai and Nous Research.
Nous Research recently made a breakthrough in distributed training using Akash. Separately, NVIDIA acquired Brev, which had been using Akash for compute. All in all, this year has been a whirlwind for Akash.
You can hear founder Greg Osuri talk about Akash’s growth here.
🏆 Caught Our Eyes..
Source: Space and Time
Project Updates
Space and Time, an AI ecosystem project, closes a $20M Series A, bringing its total funding to $50M. Arkstream Capital explains why they invested.
Skyfire uses the USDC stablecoin network to give AI agents purchasing power
Atoma Network releases its technical whitepaper
Mizu AI, a group building large open-source datasets, releases the Mizu bot, a Telegram bot for mining rewards through data processing.
Nous Research released a new open dataset, Hermes Function Calling V1, which gave Hermes 2 Pro its tool use and output capabilities
RISC Zero is now fully open-source, releasing its source code and compiler tech to the public
Bittensor
Decentralised Compute
Hyperbolic makes the Llama 3.1-405B Base model available on their platform
Prime Intellect added full H100 nodes to their marketplace
Mira and Hyperbolic partner, with Mira leveraging Hyperbolic’s GPU marketplace
Aethir announces their 6-month roadmap, doubling down on Enterprise AI
Incentives and Launches
Sharpe AI had its TGE on August 27th. It’s currently trading 20% above its IDO price.
Arweave makes DAI available for AO token minting; rewards begin on September 4th at 11:00 EDT
Nillion Network announces the Nillion Verifier Program, with rewards for early participants
Open Gradient announces its mission statement
🧠 Here’s How We Get to GPT-8
Scaling laws in AI have sparked much debate.
In simple terms, scaling laws state that as you increase the model size, data, and compute power used to train an AI, its performance gets better. So far, this has held true.
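To make this concrete, here’s a minimal sketch of a Chinchilla-style scaling law, where predicted loss falls as a power law in parameter count and training tokens. This is our own illustration; the constants are rough placeholders, not fitted values from any particular paper.

```python
# Hedged sketch of a Chinchilla-style scaling law:
#   L(N, D) = E + A / N^alpha + B / D^beta
# where N = parameters and D = training tokens.
# The constants below are illustrative placeholders, not fitted values.

def predicted_loss(n_params: float, n_tokens: float,
                   E: float = 1.7, A: float = 400.0, B: float = 400.0,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted training loss for a model with n_params parameters
    trained on n_tokens tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling up 10x in parameters and 10x in data (~100x the compute,
# since training FLOPs scale roughly as 6 * N * D) buys a modest,
# predictable drop in loss. That predictability is the bet behind
# each new generation of frontier model.
print(predicted_loss(70e9, 1.4e12))    # e.g. a 70B model on 1.4T tokens
print(predicted_loss(700e9, 14e12))    # 10x the parameters, 10x the tokens
```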
But here’s the catch: there are concerns that we’re hitting the ceiling on available data, compute power, and even electricity to fuel the next generation of massive AI models.
Epoch AI’s analysis suggests otherwise: AI scaling can continue its current trend through 2030. This means that AI training runs could be 10,000x larger than GPT-4’s.
So we’re likely to see GPT-8 hit the market eventually. It’s starting to feel like the iPhone cycle—new version, better features, same hype.
Who knows? Maybe soon, we’ll be lining up for the latest GPT like we do for the newest iPhone.
🔥 On X..
Can your X account handle a Claude roast?
ok the last sentence got me
"hey claude, roast my twitter account with dripping sarcasm in one paragraph"
— sam mcallister (@sammcallister)
4:38 PM • Aug 25, 2024
AI hack: Use this prompt
Very powerful prompt: "Explain it with gradually increasing complexity." x.com/i/web/status/1…
— Rohan Paul (@rohanpaul_ai)
2:11 PM • Aug 27, 2024
How decentralised systems can reduce pre-training costs
Just published 🔥
Our research paper shows how decentralised systems can drive down the cost of pretraining while matching the state-of-the-art under certain constraints.
Check out the white paper at the following link, and follow the 🧵for context!
macrocosmos.ai/sn9/dashboard
— Macrocosmos (@MacrocosmosAI)
11:25 AM • Aug 26, 2024
Nous Research’s report on Distributed Training gets 3K+ likes
What if you could use all the computing power in the world to train a shared, open source AI model?
Preliminary report: github.com/NousResearch/D…
Nous Research is proud to release a preliminary report on DisTrO (Distributed Training Over-the-Internet) a family of… x.com/i/web/status/1…
— Nous Research (@NousResearch)
5:25 PM • Aug 26, 2024
Spectral walks through their AI agent technology BROS
Let’s take a deep dive into BROS—the backbone of Syntax V2 that drives all your AI Agent's onchain actions! 🧠🧵~
As your Agent traverses the onchain world, Spectral's BROS (Backend Resource for Optimal Service) serves as the critical engine behind your Agent’s operations.… x.com/i/web/status/1…
— Spectral (@Spectral_Labs)
3:08 PM • Aug 28, 2024
Wei Dai on why prediction markets need zkTLS and zkLLM
1/ Why prediction markets need "zkTLS" and "zkLLM" 🧵
TL;DR: zkTLS + zkLLM can transform the resolution of a prediction market from an intersubjective task to an objective task.
Refer to the table below (from the Eigen whitepaper) for definition of terms.
— Wei Dai (@_weidai)
4:27 PM • Aug 29, 2024
Shayon discusses the path forward for DePIN
The current chapter for DePINs is about showing that the resources we've aggregated are economically productive. Thanks @EV3ventures for letting me walk through how to get there faster.
— shayon (@shayonsengupta)
4:44 PM • Aug 29, 2024
That’s it for this week! If you have specific feedback or anything interesting you’d like to share, please just reply to this email. We read everything.
Cheers,
Teng Yan & Joshua
Did you like this week's edition?
This newsletter is intended solely for educational purposes and does not constitute financial advice. It is not an endorsement to buy or sell assets or make financial decisions. Always conduct your own research and exercise caution when making investment choices.