AGI Forecasts: Crystal Balls, Moving Goalposts, and the AI Experts Who Love Them
This post slices through the AGI hype—from doomsday crystal-ball predictions to skeptical eye-rolls—with snarky insights into tech’s most audacious forecasts.


In the ever-evolving world of artificial intelligence, one topic continues to dominate cocktail party conversations (at least among the tech-obsessed): When will we achieve AGI—Artificial General Intelligence? You know, the moment when machines can outperform humans at virtually any cognitive task. If you've been following the AI space lately, you might have noticed that predicting AGI's arrival has become a bit like forecasting the weather in London—everyone has an opinion, each stated with complete confidence, and almost none of them agree.
The Optimists: "AGI Is Just Around the Corner!"
Leading the optimistic charge is Dario Amodei, CEO of Anthropic. At the 2025 World Economic Forum in Davos, Amodei made some truly jaw-dropping claims: AI could compress a century of biological progress into just 5-10 years and potentially double human lifespans within the next five years. If that weren't bold enough, he's predicting full AGI by 2026 or 2027—systems that are "broadly better than all humans at almost all things".
Amodei isn't alone in this optimistic camp. Chetan Puttagunta, a tech investor and managing partner at Benchmark, has gone even further, suggesting "we're at some form of AGI already or very very close to it". Meanwhile, OpenAI's Sam Altman has reportedly predicted AGI could arrive as soon as this year, 2025.
It's enough to make you wonder: Should we be updating our resumes to include "Experience working alongside superintelligent AI overlords"?
The Skeptics: "Not So Fast, Silicon Valley"
On the other side of this debate stands Yann LeCun, Meta's Chief AI Scientist, who has become something of a one-man reality check for the AGI hype machine. In a panel discussion at Johns Hopkins Bloomberg Center, LeCun delivered what could only be called a robust critique of the current AGI optimism.
LeCun argues that Large Language Models (LLMs) alone cannot achieve AGI, pointing out they lack essential components such as sensory learning and emotional capabilities. He even admits to hating the term "AGI" itself, noting that "human intelligence is not general at all".
"They can't really reason. They can't plan anything other than things they've been trained on. So they're not a road towards what people call 'AGI,'" LeCun explained in a TIME interview. His assessment? True AGI is still several years away, despite the breathless predictions from others in the field.
The Shifting Goalposts Phenomenon
One particularly telling aspect of the AGI prediction game is how often the timelines shift. Critics have noted that Amodei has "quietly slipped his predictions back," now talking about 2026-2027 rather than 2025-2026. This is reminiscent of what some call the "Elon Musk approach" to technology forecasting—always promising revolutionary breakthroughs just a few years away, year after year.
Adam Carrigan, co-founder of MindsDB, offers perhaps the most balanced prediction: "In 2025, we will hear companies and individuals claiming to have reached AGI, but these claims will come with significant asterisks". That's a diplomatic way of saying "prepare for a lot of hype and fine print."
What's Actually Happening in AI Today
While the AGI debate rages on, actual AI development is taking more concrete, if less flashy, directions. Industry experts predict 2025 will be the year of:
AI Teammates and Agents: AI systems that collaborate with humans across business functions
Multimodal AI: Systems that can process multiple types of data (text, images, audio) simultaneously
Long-running agent loops: Agents operating in the background to perform various tasks beyond simple chat interfaces
Enterprise AI adoption: Accelerated use of AI in businesses to enhance customer experiences and operations
Agentic AI: Programs that collaborate to do real work instead of just generating content
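To make the "long-running agent loop" idea a bit more concrete, here's a minimal toy sketch in Python. Everything in it is invented for illustration—the task names and the stand-in `act` function are placeholders for the point where a real agent would call a model or a tool:

```python
# Toy sketch of an agent loop: pull a task from a queue, act on it,
# record the result, repeat. No real model or tools involved.
from collections import deque

def act(task: str) -> str:
    # Placeholder for a real model/tool call in an actual agent.
    return f"done: {task}"

def run_agent(tasks: list[str], max_steps: int = 10) -> list[str]:
    queue = deque(tasks)
    log = []
    steps = 0
    # The step budget bounds the loop, so a misbehaving agent can't run forever.
    while queue and steps < max_steps:
        task = queue.popleft()
        log.append(act(task))
        steps += 1
    return log

results = run_agent(["summarize inbox", "draft reply"])
```

The point of the sketch is the shape, not the contents: the loop runs in the background, works through tasks rather than waiting for chat turns, and (in any sane design) has a hard cap on how long it can run.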
These developments are impressive and transformative—but they're not quite the science-fiction-level superintelligence that AGI promises.
Why Do These Predictions Matter?
You might wonder why we should care about these competing AGI timelines. Beyond the obvious philosophical and existential implications, there are practical reasons to pay attention.
For one, these predictions influence massive investment decisions. When Anthropic's CEO suggests AGI is just around the corner, it's worth noting that this might help drive up his company's valuation. The promise of imminent AGI attracts billions in funding and shapes research priorities across the tech sector.
For those of us not running AI labs, these predictions shape public policy discussions, educational curricula, and career planning. If you believe AGI is coming in two years, you might make very different life choices than if you think it's twenty years away.
Finding the Signal in the Noise
So where does this leave us, the innocent bystanders watching tech titans debate the timeline of our potential technological singularity?
Perhaps Santiago Valdarrama, a computer scientist and tech influencer, offers the most grounded perspective. He focuses on what actually matters: reaching a point "where models can generalize and become proficient at solving tasks they weren't trained to solve." His assessment? "While we are making progress, I still think we are several years away from getting to that point".
The next time you hear a breathless prediction about AGI's imminent arrival, remember that we've been here before. AI has a long history of cycles of hype followed by "AI winters" of disappointment. What makes today different is that we're seeing genuine, impressive progress—just not necessarily on the timeline the optimists are selling.
In the meantime, I'll be here, still explaining to my smart speaker that no, I didn't ask for "musics about trains" when I requested the weather forecast. AGI may be coming, but for now, we're still firmly in the realm of Artificial Specific Intelligence—impressive in narrow domains but nowhere near the general problem-solving capabilities of humans.
And perhaps that's okay. After all, if the machines become too intelligent too quickly, who will be left to write slightly snarky blog posts about technology predictions?
Conclusion
Whether AGI arrives in 2025, 2027, or 2047, one thing is certain: the journey toward more capable AI systems continues to transform our world in ways both expected and surprising. While Amodei dreams of AI-extended lifespans and LeCun cautions against oversimplifying the challenge, the rest of us can enjoy the impressive, if imperfect, AI tools already enhancing our lives.
Just don't bet your retirement fund on the exact date of AGI's arrival. Even the experts can't agree, and they're supposedly the smart ones.