This content originally appeared on DEV Community and was authored by Seth Keddy
The 90–9–1 Rule has long been used to describe the participation dynamics in internet communities. It suggests that within most online ecosystems:
- 90% of users are consumers. They observe, read, or watch, but don’t interact or contribute.
- 9% of users are editors. They interact with content, modify it, or engage with those who do.
- 1% of users are creators. They generate the majority of the original content that fuels the platform.
This pattern was observed in early web communities like Wikipedia, online forums, and comment sections — and it’s still visible today on platforms like Reddit, Stack Overflow, and GitHub.
But what happens when you apply this model to modern DevOps?
And more importantly — how does the rise of artificial intelligence, especially generative AI and autonomous agents, disrupt this model in both practice and legality?
Understanding the Rule Within DevOps
Modern DevOps pipelines are typically built and maintained by a small core group of engineers:
- Platform engineers
- Infrastructure specialists
- Senior SREs
These individuals create:
- Automation scripts
- CI/CD workflows
- Infrastructure-as-code modules
- Deployment architectures
Meanwhile, the majority of developers and operators are simply users of these systems. They consume the workflows, use prebuilt modules, or deploy via preconfigured pipelines — without deeply understanding or modifying them.
This maps perfectly to the 90–9–1 rule:
- 90% of engineers and stakeholders use DevOps tools and pipelines.
- 9% make contextual changes — modifying configs, tweaking scripts, customizing dashboards.
- 1% build the core frameworks, tooling, and reusable modules.
This inequality isn’t inherently bad — it’s efficient.
Most teams shouldn’t need to build their own deployment stacks from scratch. But that efficiency assumes a human-centric model. Enter AI.
When the 1% Become Machines
Now we have:
- GitHub Copilot
- Azure DevOps Copilot
- AWS CodeWhisperer
- CrewAI
- LangChain agents
- AutoGPT-style systems
These tools can:
- Write Dockerfiles, Kubernetes manifests, or Terraform modules
- Debug and modify CI/CD YAML configs
- Generate observability dashboards or runbooks
- Monitor performance and make deployment decisions autonomously
Suddenly, the 1% role — the creator — is increasingly filled by AI.
This raises big questions:
- If AI creates infrastructure or system code, does it count as a contributor?
- If that code causes an outage, vulnerability, or violation, who’s responsible?
This is no longer hypothetical. AI-generated code has real implications for:
- Security
- Compliance
- Legal liability
Legal and Ethical Implications of AI Participation
AI tools are not legally accountable, yet they’re shaping systems that must comply with regulations and avoid security pitfalls.
Consider these scenarios:
- An AI tool misconfigures an S3 bucket with public access → PHI (protected health information) is exposed → Who’s liable? (A minimal guardrail sketch for this case follows the list.)
- An AI agent integrates a library with an unlicensed dependency → Your org is now violating copyright.
- A remediation agent applies a bad fix to prod → Outage cascades → Who’s at fault?
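To make the first scenario concrete, here is a minimal guardrail sketch, not a complete control. It assumes boto3 is installed with read-only AWS credentials available, and it only checks the bucket-level public access block; it simply flags buckets worth a human look before anything sensitive lands in them.

```python
import boto3
from botocore.exceptions import ClientError


def bucket_allows_public_access(s3, bucket: str) -> bool:
    """True if the bucket is not fully blocking public access (a common misconfiguration)."""
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        return not all(cfg.values())
    except ClientError as err:
        # No public-access block configured at all also counts as risky.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            return True
        raise


def audit_public_buckets() -> list[str]:
    """Return bucket names that should be reviewed before sensitive data is stored in them."""
    s3 = boto3.client("s3")
    return [
        b["Name"]
        for b in s3.list_buckets()["Buckets"]
        if bucket_allows_public_access(s3, b["Name"])
    ]


if __name__ == "__main__":
    for name in audit_public_buckets():
        print(f"REVIEW: s3://{name} is not fully blocking public access")
```

Run on a schedule or as a pipeline gate, a check like this catches the misconfiguration whether it came from a human or an AI tool.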
Historically, responsibility was human:
- A junior engineer could be mentored.
- A senior engineer could be held accountable.
- An open-source contributor could be contacted.
But AI isn’t accountable. It’s opaque, prolific, and outside traditional models of liability.
Implications for the 9% — The Editors and Reviewers
If AI becomes the 1%, the 9% — the editors — are now the last line of defense.
But here’s the catch: most teams don’t deeply review AI output. They assume Copilot and others produce “good enough” code.
In DevOps, “good enough” can mean disaster.
The solution?
Elevate the editorial role to include:
- AI-linting and validation in CI/CD
- License and security scanners for AI-generated code
- Mandatory human review of AI pull requests (see the gate sketch after this list)
- Audit logs of AI agent actions (a minimal logging sketch appears at the end of this section)
- Policies for AI tool usage and oversight
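The human-review item, for example, can be enforced as a merge gate. The sketch below is an assumption-heavy illustration, not a drop-in policy: it assumes your AI tooling labels its PRs `ai-generated` (a convention you would have to adopt), that the script runs in GitHub Actions with `GITHUB_REPOSITORY`, a `PR_NUMBER` variable, and a `GITHUB_TOKEN` available, and that `requests` is installed to call the GitHub REST API.

```python
import os
import sys

import requests

API = "https://api.github.com"


def is_ai_authored(repo: str, number: int, headers: dict) -> bool:
    """Treat a PR as AI-authored if it carries an 'ai-generated' label (assumed team convention)."""
    resp = requests.get(f"{API}/repos/{repo}/pulls/{number}", headers=headers, timeout=10)
    resp.raise_for_status()
    return any(label["name"] == "ai-generated" for label in resp.json().get("labels", []))


def has_human_approval(repo: str, number: int, headers: dict, bot_logins: set) -> bool:
    """True if at least one APPROVED review comes from an account we don't consider a bot."""
    resp = requests.get(f"{API}/repos/{repo}/pulls/{number}/reviews", headers=headers, timeout=10)
    resp.raise_for_status()
    return any(
        r["state"] == "APPROVED" and r["user"]["login"] not in bot_logins
        for r in resp.json()
    )


if __name__ == "__main__":
    repo = os.environ["GITHUB_REPOSITORY"]        # "org/repo", provided by GitHub Actions
    number = int(os.environ["PR_NUMBER"])         # assumed to be exported by the workflow
    headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
    bots = {"github-actions[bot]", "dependabot[bot]"}  # adjust for the bots and agents you use

    if is_ai_authored(repo, number, headers) and not has_human_approval(repo, number, headers, bots):
        sys.exit("Blocking merge: AI-generated PR has no human approval yet.")
    print("Review policy satisfied.")
```

Wired in as a required status check, this turns “a human looked at it” into a hard precondition for merging AI-authored changes rather than a habit that erodes under deadline pressure.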
This is more than a tech shift — it’s a cultural shift. Just as DevSecOps shifted security left, AI governance must shift left too.
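Governance doesn’t have to start heavy, either. For the audit-log item above, here is a minimal sketch, assuming the agent’s actions are ordinary Python callables and that a local JSONL file is an acceptable first sink (in practice these records would ship to your logging pipeline).

```python
import functools
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_agent_audit.jsonl")  # assumed location; ship to your log pipeline in practice


def audited(action_name: str):
    """Wrap an AI-agent action so every invocation leaves a reviewable trace."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {
                "action": action_name,
                "args": repr(args),
                "kwargs": repr(kwargs),
                "timestamp": time.time(),
            }
            try:
                result = fn(*args, **kwargs)
                record["outcome"] = "ok"
                return result
            except Exception as exc:
                record["outcome"] = f"error: {exc}"
                raise
            finally:
                # Append the record whether the action succeeded or failed.
                with AUDIT_LOG.open("a") as f:
                    f.write(json.dumps(record) + "\n")
        return wrapper
    return decorator


# Hypothetical agent action: scaling a deployment. The decorator records it either way.
@audited("scale_deployment")
def scale_deployment(name: str, replicas: int) -> None:
    print(f"scaling {name} to {replicas} replicas")


if __name__ == "__main__":
    scale_deployment("checkout-service", replicas=4)
```

Even this much gives the 9% something to review after the fact: what the agent did, with which inputs, and whether it succeeded.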
Where Does the 90% Stand?
The 90% will continue to be consumers. But they’ll increasingly interact with AI-generated:
- Workflows
- Logs
- Alerts
- Dashboards
That means AI fluency is now essential — even for non-creators.
Not in terms of understanding LLM internals — but in:
- Recognizing risky output
- Knowing AI limitations
- Escalating when something seems off
Final Thoughts: Contribution Without Accountability Is Dangerous
The 90–9–1 rule still applies — but the roles are evolving.
- The 90% must learn to question AI output
- The 9% must become reviewers, auditors, and validators — not just modifiers
- The 1% may increasingly be machines — but humans are still responsible
AI accelerates — but it also amplifies risk.
And when the contributor isn’t human, oversight becomes non-negotiable.
If we don’t update our understanding of contribution and accountability in the AI age, we risk letting automation outpace responsibility.
And that’s a DevOps anti-pattern no pipeline can fix.
Discussion Questions
- Are AI tools contributing directly to your DevOps workflows? How do you manage their output?
- Have you implemented governance for AI-generated code or automation?
- Who reviews or approves AI decisions in your pipelines?
Let’s hear your thoughts in the comments.
We need to shape this now — before the legal and operational consequences force us to.