A forkable set of norms for software development teams using AI tools. Fork it, adapt it, make it yours.
LLMs make it easy for anyone on the team to prompt a feature into existence. Left unchecked, this leads to feature creep: the product becomes increasingly complex, bloated, and cluttered.
"[In the age of AI] we have to ruthlessly say no to things that seem legitimately neat but aren't core." – Michael Feldstein
Here's how we decide "what gets built" on our team.
Overall direction is set by:
- Name: company strategy
- Name: product strategy
- Name: engineering and architecture
It's their job to maintain a clear product roadmap, focused on our most critical milestones.
Don't make decisions about "what to build" on your own. Reach out to Name if you have suggestions or need guidance.
[is this section needed - something like " Every addition has to be maintained, documented, tested, and supported. Before an idea becomes a ticket, someone should be able to clearly articulate the user problem it solves — not just that it's neat, or that it was easy to prototype."]
"All code must have a human owner who will take responsibility for it. It’s their code, just as if they’d written it in an integrated development environment; they just happened to use a different tool. If all generated code must ultimately be owned and reviewed by a human, that person is able to tune the results for safety, efficiency, and quality." – Ben Werdmuller
Every Pull Request has a human owner and a human reviewer.
Peer review is not optional. Automated testing must gate all production deployments.
"LLMs keep pulling us to ship the next feature but there's 100x more value in fixing what we have, improving our process of how we build things." – Dax Raad
The increased velocity AI creates can inflate workloads and lead to burnout. Set limits. Take breaks. Slow down. Think deeply.
Our team primarily uses Claude Code.
For spec-driven development we use OpenSpec / Get Shit Done /
- Plan (spec-driven development) → Ask AI for design options → Ask AI to propose architecture or tasks
- Implement the plan → Have AI start to implement the plan in iterations ("build this, then I'll review") → Make commits throughout to save progress → Document
- Test → Generate tests using this tool
- Generate PR → Run lint/tests/security checks
- PR review → Human review required even for AI code
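The "run lint/tests/security checks" step above can be sketched as a single gate script. The `run_*` functions are placeholders, not part of the original workflow; substitute whatever lint, test, and audit commands your project actually uses:

```shell
#!/usr/bin/env sh
# Sketch of a pre-PR gate: run every check, stop at the first failure.
# The run_* functions stand in for real commands (e.g. eslint, pytest,
# npm audit) -- they are illustrative placeholders only.
set -e  # abort immediately if any check fails

run_lint()  { echo "lint ok"; }     # stand-in for your linter
run_tests() { echo "tests ok"; }    # stand-in for your test suite
run_audit() { echo "audit ok"; }    # stand-in for a security scan

run_lint
run_tests
run_audit
echo "all checks passed -- ready to open a PR"
```

Because of `set -e`, the script exits non-zero at the first failing check, which makes it easy to wire into CI as a required status check before a PR can merge.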
- Dax Raad (@thdxr), of OpenCode, whose post inspired the creation of this document.
- Ben Werdmuller for his article Good Vibes, Bad Vendors, which articulated the skills shift, the importance of engineers being central to the process, and the risks of burnout, security, and quality degradation.
- Michael Timbs's article Code Quality in the Age of Coding Agents is a good reference.
- Addy Osmani (Google Chrome team engineering lead) wrote up his team's specific workflow: LLM coding workflow going into 2026