This content originally appeared on DEV Community and was authored by Mak Sò
The ARC-AGI leaderboard isn’t a celebration. It’s a hallucination with a high score.
The First Time I Met AGI
I first encountered the concept of AGI, real AGI, around 2015 or 2016.
Not in a blog post. Not in a product pitch. But through long hours of internal digging, trying to grasp what “general intelligence” actually means in cognitive terms, not in hype cycles or funding rounds.
Back then, it was hard to even wrap my head around it. It took me months to begin to understand what AGI implied: The scope. The risk. The ontological rupture it represents.
So I went deep. I co-founded Abzu with some truly brilliant people.
And I tried to follow the thread down: from models, to reasoners, to cognition itself.
And Now?
Now it’s 2025. And worse, nearing 2026. And we’re flooded with noise.
People who have never studied cognition, never touched recursive reasoning, never even defined what intelligence means are telling the world that:
“AGI is almost here.”
No. It’s not.
And worse — AGI isn’t even scoped yet.
Not correctly. Not rigorously.
We don’t even agree on what “general” means.
What AGI Actually Demands
Here’s what I’ve learned over nearly a decade, through ethology, architecture design, and cognitive experiments:
- Intelligence is not a monolith.
- It emerges from conflict. From separation of thought. From multiple perspectives that disagree, and then, sometimes, reconcile. There is no single model that can do that.
Why?
Because real intelligence isn’t just statistical next-token prediction. It’s contradiction, held in tension. It’s the interplay between memory and intuition. Between structured logic and emotional relevance.
Between what I know now and what I used to believe.
What That Leaderboard Gets Wrong
The ARC-AGI leaderboard shows dots climbing a curve.
Cost vs. performance.
Tokens in, answers out.
That’s fine for task-solving. But AGI isn’t a task. AGI is a scope of adaptive cognition across unknown domains, with awareness of failure, abstraction, and reformation.
AGI needs to:
- Break itself apart
- Simulate internal dissent
- Reason in loops, not just in sequences
- Remember contradictions, not flatten them
- Develop subjective models of experience, not just text
None of that is visible in the chart. Because none of that is even attempted in most systems today.
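As a toy illustration of what “remember contradictions, not flatten them” could look like in code, here is a minimal sketch: a belief store that keeps conflicting stances side by side instead of overwriting them. Every name in it is an illustrative placeholder, not any real system’s API.

```python
# Minimal sketch: a memory that records contradictions instead of flattening them.
from dataclasses import dataclass, field

@dataclass
class Belief:
    claim: str      # natural-language proposition, e.g. "the plan is safe"
    holds: bool     # True if the source asserts the claim, False if it denies it
    support: float  # crude confidence in [0, 1]
    source: str     # which perspective produced it (planner, critic, memory, ...)

@dataclass
class BeliefStore:
    beliefs: list[Belief] = field(default_factory=list)

    def add(self, belief: Belief) -> None:
        # Deliberately keep conflicting entries; nothing is silently discarded.
        self.beliefs.append(belief)

    def contradictions(self) -> list[tuple[Belief, Belief]]:
        # Same proposition, opposite stance: an unresolved tension to reason over later.
        return [
            (a, b)
            for i, a in enumerate(self.beliefs)
            for b in self.beliefs[i + 1:]
            if a.claim == b.claim and a.holds != b.holds
        ]

if __name__ == "__main__":
    store = BeliefStore()
    store.add(Belief("the plan is safe", holds=True, support=0.7, source="planner"))
    store.add(Belief("the plan is safe", holds=False, support=0.4, source="critic"))
    for a, b in store.contradictions():
        print(f"tension on '{a.claim}': {a.source} ({a.support}) vs {b.source} ({b.support})")
```

The data structure is trivial on purpose. The point is the policy: conflict is recorded as a first-class object that later reasoning has to address, not noise to be averaged away.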
The Real Danger
It’s not the models.
It’s the narrative.
Telling people AGI is near, when the field hasn’t even defined what it is: that’s not innovation. That’s cognitive malpractice. We’re building scaffolding over a void, and convincing the public that we’ve hit the summit when we haven’t even drawn the map.
What Needs to Change
We need to stop chasing scores and start building systems of cognition.
- Multi-agent reasoning
- Deliberation loops
- Memory with scoped decay and identity
- Contradiction-aware execution
- Traceable thought, not just output
That’s why I built OrKa. Not because I think I have the full answer.
But because I know for a fact that single-model intelligence will never be enough. If AGI ever emerges, and that is still an open question, it won’t come from a bigger model. It’ll come from the orchestration of thought. From reasoning systems that can doubt themselves, disagree internally, and change their minds, not just complete the sentence.
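To make that shape concrete, here is a rough sketch of a deliberation loop with stub agents standing in for real reasoners. It is not OrKa’s architecture or API; the agent roles, the stop rule, and every identifier are assumptions made for illustration.

```python
# Rough sketch: multi-agent deliberation with an inspectable trace.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Turn:
    agent: str
    answer: str
    objection: str | None  # None means the agent accepts the current answer

# An agent inspects the current working answer and either endorses it or objects.
Agent = Callable[[str], Turn]

def proposer(current: str) -> Turn:
    # Proposes something if nothing is on the table yet, otherwise endorses it.
    return Turn("proposer", current or "scale up a single model", None)

def critic(current: str) -> Turn:
    # Objects to the scale-only answer and replaces it; endorses anything else.
    if "single model" in current:
        return Turn("critic", "orchestrate several reasoners that can disagree",
                    "scale alone is not cognition")
    return Turn("critic", current, None)

def deliberate(agents: list[Agent], max_rounds: int = 3) -> tuple[str, list[Turn]]:
    answer: str = ""
    trace: list[Turn] = []
    for _ in range(max_rounds):
        dissent = False
        for agent in agents:
            turn = agent(answer)
            trace.append(turn)                # every step stays inspectable
            if turn.objection is not None:    # an objection can change the answer
                answer, dissent = turn.answer, True
            elif not answer:
                answer = turn.answer
        if not dissent:                       # converged: no agent objected this round
            break
    return answer, trace

if __name__ == "__main__":
    final, trace = deliberate([proposer, critic])
    for t in trace:
        note = f"  (objection: {t.objection})" if t.objection else ""
        print(f"[{t.agent}] {t.answer}{note}")
    print("final:", final)
```

The toy logic doesn’t matter. The shape does: dissent is an explicit event, the answer can change because of it, and every turn lands in a trace you can read back.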
Final Word
To anyone who still believes AGI is a product you can wrap in a prompt:
Stop.
To anyone who’s been told “we’re almost there”:
- Don’t listen to loud certainty.
- Listen to quiet contradiction.
- That’s where real intelligence starts.
And to the few of us who know the scope is still undefined:
Keep building. Keep doubting. Keep looping over your own beliefs.
That’s the only path that might, might, lead to something sensate.