Weekly AI Edge #13
Pin AI raises $10M in funding. OpenAI releases a Strawberry!?
GM! Are you ready for the craziness of another Token 2049 week? We sure are not.
Btw, if you're still deciding what to attend, Brody curated a great list of AI/DePIN events for next week.
We'll be at several of those events, so come say hi!
In this edition of our weekly AI Edge, we cover:
CUDOS poised to join the Artificial Superintelligence Alliance, price drops 10%
OpenAI releases a… Strawberry?
Pin AI raises $10M in pre-seed funding for personal AI
Our favourite tweets on X
State of the Market..
Source: Coingecko
The overall Crypto AI market has risen 8.6% from $21B to $23.3B this week.
The AI agents subcategory (Coingecko) saw an impressive 20% jump, driven largely by gains in FET and AGIX.
A big part of the buzz comes from the Artificial Superintelligence Alliance (ASI), with news that CUDOS may soon join.
This news has pushed up token prices for ASI, AGIX, and OCEAN by at least 23%, while CUDOS has dropped about 10% since last week. It seems ASI is quickly becoming an AI conglomerate, absorbing any AI-related project it needs.
| Token | Price (7-day change) | FDV |
|---|---|---|
| Bittensor (TAO) | $284.69 (+17.3%) | $5.9B |
| Near (NEAR) | $4.19 (+10.6%) | $4.9B |
| Artificial Superintelligence Alliance (ASI) | $1.41 (+29.4%) | $3.7B |
| Cudos (CUDOS) | $0.008 (-9.4%) | $74M |
Chart of the Week
Hermes 3 Nears 1B Tokens Processed Daily
Recent activity on Nous Research's flagship Hermes 3 405B model has reached almost 1B tokens processed daily on OpenRouter. This suggests that demand for open-source models is significant and steadily growing.
The model has shown strong performance in user alignment, multi-turn conversations, RAG, and other capabilities.
FYI: We released a report on Nous Research earlier this week. Fresh from the oven.
OpenAI Launches a Strawberry
Source: @DrJimFan
OpenAI just released its latest model, o1 (codename: Strawberry), and the biggest shift? It takes its time to think.
This makes it particularly suited for more complex, planning-heavy tasksāthink solving crossword puzzles or tackling problems that require deep reasoning, like a PhD student in physics or biology might.
You'll notice it's slower, taking more time to generate responses. We're seeing the concept of inference-time scaling finally being put into production.
Inference-time scaling is the new buzzword in AI.
The idea is to improve AI performance by increasing the compute during inference rather than constantly training larger and larger models.
If this picks up, expect the demand for GPUs to ramp up massively.
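As a toy illustration of the idea, one of the simplest ways to spend extra compute at inference time is best-of-n sampling with majority voting (often called self-consistency). The sketch below is hypothetical: the "model" is a deterministic stand-in, since a real system would sample an LLM with temperature > 0.

```python
from collections import Counter

def sample_answer(question: str, i: int) -> str:
    # Hypothetical stand-in for one stochastic model call:
    # most samples agree on "42", an occasional sample errs.
    return "41" if i % 5 == 0 else "42"

def self_consistency(question: str, n_samples: int) -> str:
    # More inference compute = more samples; then majority-vote
    # across the candidate answers instead of trusting one draft.
    votes = Counter(sample_answer(question, i) for i in range(n_samples))
    return votes.most_common(1)[0][0]
```

The tradeoff is linear: n samples cost roughly n times the compute of a single response, which is exactly why wide adoption of this pattern would push GPU demand up.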
Caught Our Eyes..
Source: Pin AI
Project Updates
PIN AI raises $10M in pre-seed funding to launch the world's first open platform for personal AI. Notable investors like a16z, Hack VC, and angels like Illia of NEAR participated.
Flock.io, the decentralised ML data network and recipient of the Ethereum Foundation Research Grant, releases their whitepaper
Peaq Network announces that they are launching this month
Mentals AI is garnering attention with their markdown AI agents
Hyperbolic partners with Black Forest Labs to bring FLUX text-to-image to Hyperbolicās AI cloud
Parallel TCG's co-founder Kalos summarizes all of the recent updates
Topology Ventures, an AI native venture firm, is hiring a technical investor (great opportunity imho)
Incentives / Rewards Programs
Almanak, a project building AI agents for DeFi, launches their alpha testing and points program
Ora Protocol's points program went live on September 11th. They also released Tora, a node program that anyone can run to secure the Ora network and earn points
Sapien, a data labelling platform, is running an incentivized alpha where users can earn points
Privasea allocates 10% of tokens for their airdrop; caveat is that you have to download their mobile app and verify yourself
Arweave / AO
Arweave / AO weekly highlights summarized by Kyle_13
AO crosses $50M in DAI deposits
Meka City, an NFT-gated game built on Reality Protocol from AO, goes live
Bittensor
Minerās Union receives 113K TAO ($29.4M) from the Bittensor foundation for validation on various subnets
Bittensor releases Child Hotkeys, creating more decentralization and the ability to delegate from one hotkey to many
Omega Labs, subnet 24 on Bittensor, is building an any-to-any model that handles text, audio, and video together. Their new Focus App incentivizes users to contribute data for better training
Ventura Labs writes about Bittensor Subnet 38, specially built for distributed training
Fiber, a newly released protocol, is purpose-built to shortcut the process of building Bittensor subnets
Macrocosmos releases a full testnet for OpenMM, a key step towards releasing their flagship TaoFold
A Reflection on Reflection 70B
Source: Matt Shumer
On September 6th, Matt Shumer, CEO of Otherside AI, unveiled Reflection 70B, claiming it was competitive with top closed-source models like Claude 3.5 Sonnet and even outperformed them on several benchmarks.
The secret behind this impressive performance was āReflection tuning,ā a technique that allows LLMs to iterate over their responses and correct mistakes before delivering them to the user.
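In rough pseudocode terms, a reflection-style loop of this kind looks like the sketch below. All three model calls are stubbed out with hypothetical deterministic functions purely for illustration; in a real system, each would be an LLM call.

```python
from typing import Optional

def generate(prompt: str) -> str:
    # Hypothetical stand-in for the model's first draft (contains an error).
    return "2 + 2 = 5"

def critique(draft: str) -> Optional[str]:
    # Hypothetical stand-in for the model reviewing its own draft;
    # returns feedback, or None when the draft passes inspection.
    return "arithmetic error" if "= 5" in draft else None

def revise(draft: str, feedback: str) -> str:
    # Hypothetical stand-in for the correction step.
    return draft.replace("= 5", "= 4")

def reflect(prompt: str, max_rounds: int = 3) -> str:
    # Draft, self-critique, revise; stop when the critique passes
    # or the round budget is spent, then return the final answer.
    draft = generate(prompt)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if feedback is None:
            break
        draft = revise(draft, feedback)
    return draft
```

Note that the whole loop runs before the user sees anything, which is why the technique trades latency for (claimed) accuracy.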
Excitement was high until some users began questioning its legitimacy. Skepticism grew over its near-perfect GSM8K score (99%!) and suspicions that the private API Shumer provided might just be a wrapper around Claude 3.5.
Speculation surfaced that Shumer was using the hype around Reflection 70B to boost the value of his investments.
The backlash from the community was swift, leading to Reflection 70B being removed from OpenRouter and Shumer issuing an apology.
Some learning points:
Donāt believe everything you see!
Simple "hacks" like Reflection tuning will probably not lead to outsized performance gains.
This whole episode highlights the increasing demand for accountability and transparency in AI. Verifiable inference FTW.
Reflecting on Reflection 70B: Was It Worth the Hype?
On September 6th, 2024, Matt Shumer introduced Reflection 70B, a model he claimed to be the world's top open-source large language model (LLM), outperforming leading models such as Claude 3.5 Sonnet, GPT-4o, and Llama 3.1 405B… x.com/i/web/status/1…
— Hyperbolic (@hyperbolic_labs)
12:13 AM • Sep 11, 2024
On X..
Apple Intelligence for iPhone 16. Will you make the switch?
Apple just announced its Apple Intelligence features for iPhone 16.
The 8 most impressive demos:
1. Apple Intelligence accessing the iPhone's camera for 'Visual Intelligence' on any surroundings
— Rowan Cheung (@rowancheung)
8:14 PM • Sep 9, 2024
Nillion Cofounder on Privacy Enhancing Technologies in Web 3
I was asked the other day why I co-founded the @nillionnetwork
Part of the answer is in one of our early slides:
Web 1 was mostly public.
Privacy became a core enabling factor that drove web 2's growth into a multi-trillion-dollar market. Almost no web 2 application would be… x.com/i/web/status/1…
— Miguel de Vega (@miguel_de_vega)
7:44 AM • Sep 8, 2024
Alex Wacy writes about Compute Labs
Imagine the market today without the AI x DePIN industry - impossible, right?
But as a leading force, few projects break through to become unicorns.
Looking for a future unicorn?
My pick is Compute Labs 🧵⬇️
— Alex Wacy (@wacy_time1)
9:42 AM • Sep 10, 2024
LLMs will always hallucinate.
"LLMs Will Always Hallucinate, and We Need to Live With This"
Key points from the paper:
Hallucinations in LLMs not just mistakes, but inherent property. Arise from undecidable problems in training and usage process. Can't be fully eliminated through architectural… x.com/i/web/status/1…
— Rohan Paul (@rohanpaul_ai)
12:02 AM • Sep 12, 2024
Brendan Farmer on when compute needs to be verified
It's worth asking when computation needs to be verified.
- When the transfer of value depends on the result of a computation.
It's not realistic to use verifiable compute (either ZK, fraud proofs, cryptoeconomic security, etc) for all computations imo. If there is a problem… x.com/i/web/status/1…
— Brendan Farmer (@_bfarmer)
2:24 PM • Sep 11, 2024
Jasper Zhang of Hyperbolic breaks down AI Benchmarks - GSM8K
Understanding AI Benchmarks - GSM8K
@mattshumer_'s Reflection Llama model was released two days ago, achieving higher metrics on several popular benchmarks compared to GPT-4o, Claude 3.5 Sonnet, and Llama 3.1 405B. Notably, the post claimed the model reached a score of 99.2% on… x.com/i/web/status/1…
— Jasper (@zjasper666)
3:18 PM • Sep 8, 2024
Ben Fielding on Human and Machine knowledge curation
🧵 Human <> machine knowledge curation
Machine learning provides us with a vastly improved way of storing and interacting with knowledge in a digital form.
To do this, we need new basic operations and those new basic operations need machines to perform them.
— Ben Fielding (@fenbielding)
7:00 PM • Sep 9, 2024
That's it for this week! If you have specific feedback or anything interesting you'd like to share, please just reply to this email. We read everything.
Cheers,
Teng Yan & Joshua
This newsletter is intended solely for educational purposes and does not constitute financial advice. It is not an endorsement to buy or sell assets or make financial decisions. Always conduct your own research and exercise caution when making investment decisions.