This content originally appeared on DEV Community and was authored by Sean McCurdy
I’ve been thinking a lot about org charts lately. Not because I love them (I don’t), but because I recently started at Lattice after spending years working on AI startups, and getting onboarded into the HR world has me reflecting on what the future of work might actually look like.
What’s struck me isn’t the technology itself, but how we’re still thinking about AI within the constraints of traditional organizational structures. We debate whether AI will replace people or empower them, but we’re not really asking how AI might impact organizational design itself – how it could reshape people’s roles within companies.
The “Aha” Moment That Started This Thinking
A few weeks ago, I was on a bike ride around the UW campus in Seattle with my partner when she made an observation that really stuck with me. She predicted that undergrad programs might evolve from rigid disciplines like neuroscience or engineering to broader inquiry-driven questions like “Why do we age?” or “What economic drivers predicted the fall of Rome?” More like how PhD programs work today.
That got me thinking: would that same idea apply to business organizations?
Right now, we’re all stuck trying to figure out how AI fits into our existing structures. But what if AI changes the entire way we organize work itself?
What I Think Is Coming: The Flattening
Here’s my core thesis, and I know it sounds pretty radical: AI coordination will enable organizations to flatten from hierarchical org charts into networks of specialized expertise nodes connected through AI systems.
Instead of the traditional CEO → VP → Director → IC pyramid, imagine organizations where specialized experts contribute their judgment directly to an AI coordination layer that synthesizes insights, identifies conflicts, and facilitates decision-making across the network.
When a strategic decision needs to be made – say, entering a new market – instead of information flowing up through management layers and decisions flowing back down, the AI system simultaneously queries the relevant expertise nodes. Marketing research for customer insights, engineering for technical feasibility, finance for resource implications, design for user experience considerations. The AI synthesizes perspectives, identifies areas of agreement and disagreement, and presents options with unprecedented clarity and speed.
It’s not about replacing human judgment. It’s about making human expertise genuinely accessible at the moment decisions get made, with AI acting as a delegation engine that keeps human-centered work at the core of company decisions.
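To make this concrete, here’s a minimal sketch of what that coordination layer might look like. Everything in it is hypothetical (the node names, the `ExpertInput` shape, the `query_node` stub); it’s only meant to show the fan-out-and-synthesize pattern, with each node ultimately backed by a human expert:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class ExpertInput:
    node: str          # expertise domain, e.g. "finance"
    position: str      # the expert's recommendation
    confidence: float  # self-reported confidence, 0.0 to 1.0

async def query_node(node: str, question: str) -> ExpertInput:
    # Stub: in practice this would route the question to a human
    # expert (plus their tools) and await their judgment.
    return ExpertInput(node, f"{node} perspective on: {question}", 0.8)

async def coordinate(question: str, nodes: list[str]) -> list[ExpertInput]:
    # Fan out to every relevant expertise node at once, instead of
    # routing the question up and down a management chain.
    return list(await asyncio.gather(*(query_node(n, question) for n in nodes)))

async def main():
    inputs = await coordinate(
        "Should we enter a new market next quarter?",
        ["marketing", "engineering", "finance", "design"],
    )
    # A real synthesis step would cluster agreement and surface
    # conflicts; here we just print each node's contribution.
    for i in inputs:
        print(f"[{i.node}] ({i.confidence:.0%}) {i.position}")

asyncio.run(main())
```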
Why I Think This Could Actually Work
The research backing this up is pretty compelling. Studies from MIT, McKinsey, and Harvard consistently show that cognitive diversity drives better decision-making. Teams with diverse perspectives outperform homogeneous teams by 35-70% on complex problem-solving tasks. Yet traditional hierarchies limit whose voices get heard. AI coordination could surface insights from any expertise node regardless of seniority.
There’s also Brooks’ Law, the observation that adding people to a late project makes it later. The culprit is communication overhead: with n people, the number of pairwise communication channels grows as n(n-1)/2. AI coordination could sidestep this by absorbing the communication complexity itself, letting productivity scale more linearly with expertise quality.
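To put numbers on that growth (the n(n-1)/2 channel count comes straight from The Mythical Man-Month):

```python
# Pairwise communication channels grow quadratically with headcount:
# channels = n * (n - 1) / 2. Ten people is manageable; fifty is not.
for n in (5, 10, 20, 50):
    print(f"{n:>3} people -> {n * (n - 1) // 2:>5} channels")
# 5 -> 10, 10 -> 45, 20 -> 190, 50 -> 1225
```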
And here’s a stat that blew my mind: research shows managers spend 61% of their time just gathering and redistributing information. When AI can maintain context across hundreds of conversations and synthesize complex inputs in real-time, this coordination bottleneck disappears.
The examples of what’s already happening are pretty wild. Companies like Safe Superintelligence have 20 employees and a $32 billion valuation. Cursor hit $100M ARR with 20 people in 21 months. Lovable reached $17M ARR with 15 employees in just 3 months. These aren’t flukes – they’re early signals of what happens when you can coordinate human expertise without traditional management overhead.
What I’m Seeing in Practice Already
I’ve been watching this play out in engineering teams. Instead of engineers spending 37-44% of their time in coordination meetings, AI tools are starting to handle sprint planning, dependency tracking, and progress updates. Atlassian’s AI Work Breakdown automatically decomposes large epics into actionable issues in minutes rather than hours. GitHub Copilot provides context-aware code reviews. Engineers can focus entirely on architectural decisions and solving complex technical problems.
In product management, AI is continuously monitoring user feedback, market signals, and technical constraints. Tools like Monterey AI automatically triage support tickets and surveys into Linear issues, while Kraftful AI transforms customer calls into well-defined user stories. Product managers can spend their time on strategic vision and difficult trade-offs instead of being human information aggregators.
Customer success teams are seeing similar transformations. Intercom’s Fin AI Agent resolves up to 86% of routine support volume. Real-world deployments show 65% deflection rates, saving hundreds of support hours per month. Human agents only work on cases requiring genuine human judgment and creativity, leading to higher job satisfaction.
The Hard Questions I Keep Wrestling With
But here’s where it gets complicated, and these are the questions that keep me up at night.
Accountability becomes completely different. In traditional orgs, managers take responsibility for team outputs even when they lack domain expertise. In AI-coordinated networks, accountability becomes distributed and expertise-specific. The security engineer who flagged an unaddressed vulnerability, the UX researcher who identified user confusion, the market analyst who predicted competitive threats – they all have direct, traceable accountability for their specific contributions.
We’re dismantling career advancement as we know it. Traditional management ladders provide not just skill development, but identity, status, and financial progression. If those disappear, we need entirely new models. Maybe advancement means becoming a deeper expert in your domain. Maybe it means rotating through different expertise areas. Maybe learning to collaborate with AI coordination systems becomes a meta-skill.
The identity crisis is real. Many professionals derive significant meaning from “managing people” or “leading teams.” Flattened organizations need to provide alternative sources of professional identity and fulfillment through public recognition systems, teaching roles that carry high status, or ownership of strategic initiatives.
Who makes the final call? Without permanent decision-makers, organizations need dynamic authority structures: context-sensitive leadership where different people lead different decisions based on expertise relevance, escalation rules for when AI should elevate decisions to humans, and conflict-resolution mechanisms for when expert opinions fundamentally disagree.
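Here’s one way the escalation piece might work, as a toy sketch. The thresholds and the disagreement test are my own assumptions, not an established design:

```python
from statistics import pstdev

# Illustrative thresholds only; real systems would tune these.
CONFIDENCE_FLOOR = 0.6
DISAGREEMENT_CEILING = 0.25

def should_escalate(confidences: list[float], positions: list[str]) -> bool:
    """Escalate to a human decision-maker when the experts are unsure
    or when their recommendations fundamentally diverge."""
    low_confidence = min(confidences) < CONFIDENCE_FLOOR
    spread = pstdev(confidences) if len(confidences) > 1 else 0.0
    divergent = len(set(positions)) > 1 and spread > DISAGREEMENT_CEILING
    return low_confidence or divergent

# Aligned, confident experts: the network can proceed.
print(should_escalate([0.9, 0.85, 0.8], ["enter", "enter", "enter"]))  # False
# Split, uncertain experts: elevate to a context-appropriate human lead.
print(should_escalate([0.9, 0.3], ["enter", "wait"]))                  # True
```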
The Fork in the Road: Democracy vs. Authoritarianism
This is probably the most critical question, and honestly, it’s what I worry about most. AI-coordinated organizations could become the most democratic workplaces in history, or the most oppressive. It all depends on how we build them.
The democratic path requires transparency: I’m talking about algorithm auditing, decision traceability, bias detection, and override mechanisms. We need constitutional frameworks for how AI systems should operate, rotating oversight, and ways for anyone to suggest changes to coordination algorithms.
The alternative is power concentration: whoever controls the AI systems sets the rules, and everyone else becomes subject to algorithmic authority. Avoiding that requires distributed control, transparent and auditable AI systems, and safeguards that keep any individual or small group from permanently capturing the coordination layer.
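As a thought experiment, decision traceability with recorded (never silent) overrides could look something like this. All of the names and structures here are assumptions on my part:

```python
import json, time
from dataclasses import dataclass, field, asdict

# A toy traceability log; a real one would need tamper-evidence
# (e.g. hash chaining) and access controls.
@dataclass
class DecisionRecord:
    question: str
    inputs: dict[str, str]            # expertise node -> its position
    outcome: str
    overridden_by: str | None = None  # human override, always recorded
    timestamp: float = field(default_factory=time.time)

audit_log: list[DecisionRecord] = []  # append-only by convention

def record(question: str, inputs: dict[str, str], outcome: str) -> DecisionRecord:
    rec = DecisionRecord(question, inputs, outcome)
    audit_log.append(rec)
    return rec

def override(rec: DecisionRecord, who: str, new_outcome: str) -> None:
    # Overrides append a new entry rather than rewriting history,
    # so the trail stays reviewable by anyone in the network.
    audit_log.append(DecisionRecord(rec.question, rec.inputs,
                                    new_outcome, overridden_by=who))

rec = record("Enter a new market?", {"finance": "wait", "marketing": "enter"},
             "pilot in one region")
override(rec, who="exec-sponsor", new_outcome="full entry")
print(json.dumps([asdict(r) for r in audit_log], indent=2))
```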
How We’ll Know It’s Working
I think we’ll see organizations achieving 2-3x faster revenue growth per employee compared to traditional benchmarks, with profit margins expanding as coordination costs plummet.
The operational signals will be more immediate: 30-50% fewer status meetings, people spending 60%+ of their time on domain expertise instead of coordination overhead, faster decision cycles without quality degradation.
But the human signals matter most. Stress shifting from organizational politics to domain challenges. Recognition based on expertise impact rather than hierarchical position. People reporting higher job satisfaction because they’re focused on what they’re actually good at.
The Transition Will Be Disruptive (And That’s Okay)
Here’s the hard truth I keep coming back to: this transition will be disruptive. Many current management roles will become obsolete, not because AI replaces managers, but because the coordination functions they serve become unnecessary.
The organizations that figure this out first will have massive competitive advantages. But it’s not inevitable. Most organizations will choose the safer path of using AI to make existing hierarchies more efficient instead of restructuring around AI coordination.
Startups are in a different position. We have to find ways to compete with well-funded incumbents, so we might lean more heavily on flattening to scale our impact, staying nimble with small, technology-intensive teams.
What This Means for All of Us Building Things
The future of work isn’t about humans versus AI or even humans plus AI. It’s about humans and AI creating entirely new organizational forms that were impossible before now.
This requires being honest about what humans are uniquely good at, investing in developing those capabilities, and designing organizations that prioritize human agency rather than algorithmic efficiency.
The question isn’t whether AI will change how we work – it already has. The question is whether we’ll use it to create more human workplaces or less human ones.
That choice is still ours to make, but I think the window for thoughtful implementation is narrowing as early adopters gain competitive advantages.
What do you think?
As always, be brutal. Let’s start a conversation.
You can check out the full article here on Substack.