AGI and its scarier cousin ASI get thrown around like they're waiting just over the horizon. Maybe they are, maybe they aren't. The truth is, with the way we're building LLMs right now, we're running into a wall. Training data is plateauing, and gains from brute-force scaling are already showing diminishing returns.
Take GPT-5. On paper it's the next leap forward. In practice, it's not a revolution. The most interesting feature is an auto-router: the system decides whether your query deserves the expensive, long-form reasoning or the cheaper, quick-fire response. Clever, sure, but that's not a new level of intelligence. It's a cost-saving mechanism dressed up as progress.
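The routing idea is simple enough to sketch. This is a toy illustration, not OpenAI's actual system: the model names, signal words, and thresholds below are all invented. The point is just that "routing" here means scoring a query's apparent complexity and picking a price tier.

```python
# Hypothetical auto-router sketch. All names, heuristics, and thresholds
# are invented for illustration; this is not how GPT-5's router works.

def estimate_complexity(query: str) -> float:
    """Crude proxy: longer, multi-step questions score higher."""
    signals = ["prove", "step by step", "analyze", "compare", "derive"]
    score = min(len(query) / 500, 1.0)  # length as a rough difficulty signal
    score += 0.2 * sum(word in query.lower() for word in signals)
    return min(score, 1.0)

def route(query: str, threshold: float = 0.5) -> str:
    """Dispatch to a cheap fast model or an expensive reasoning model."""
    if estimate_complexity(query) >= threshold:
        return "reasoning-model"  # slow, costly, long-form reasoning
    return "fast-model"           # quick, cheap response

print(route("What's the capital of France?"))
# -> fast-model
print(route("Prove step by step that sqrt(2) is irrational and compare with e."))
# -> reasoning-model
```

Notice what the heuristic optimizes: spend. Nothing in the routing decision makes either model smarter; it just decides how much compute your question is worth.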
That’s not to say breakthroughs won’t happen. There’s too much money, talent, and energy in the field to bet against novelty forever. We’ve got living human cells in boxes (wetware, yuck), large concept models, world models, pieces of machinery that could click together in unexpected ways. If AGI arrives, it’ll likely be a Frankenstein’s monster of many approaches rather than one elegant breakthrough.
But here's the thing: chasing AGI is a distraction for most of us. It's an intellectual sideshow compared to the messy, urgent reality of what AI is already doing today. Algorithms that amplify misinformation. Agents that can buy things on your behalf. Models that transform industries before regulators even wake up. Tools that can solve real problems while creating new ones just as fast.
Whether AGI comes in five years, fifty, or never, we can’t let the thought experiment blind us. The real work is fixing the harms, scaling the benefits, and steering this technology while it’s still in our hands.
