Pillar 4: The AI as Collaborator
Treat your AI assistant like a junior developer who needs onboarding.
This mental model changes everything about how you work. A junior developer knows how to code but nothing about your project, your business logic, your architecture decisions, or your team’s conventions. You wouldn’t hand them a ticket and walk away. You would explain the codebase, share the coding standards, point them to docs, and review their pull requests carefully.
Your AI assistant operates under the same constraints. It can write code, but it cannot browse through your codebase and intuit the business logic. It cannot figure out unstated requirements by reading implementation details. It cannot understand the “why” behind technical decisions without explanation. Your job is to be the senior developer who provides that context and reviews the output.
What We Expect
You onboard your AI the way you would onboard a person. This means providing: an explanation of what the project does and why, the technology stack and architectural patterns, coding standards and conventions, examples of how similar problems have been solved, and clear success criteria for the work. Most of this lives in your rules file and project documentation. See Pillar 1: Context Engineering for how to structure that onboarding.
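As a concrete sketch, a project rules file covering those onboarding points might look like the following. Everything here is hypothetical (the project, paths, and conventions are invented for illustration); adapt the filename to whatever your tool reads, such as an agents or rules file.

```markdown
# Project rules (hypothetical example — substitute your own project details)

## What this project is
Invoicing API for small businesses. Correctness of money math matters more
than raw throughput.

## Stack and architecture
TypeScript + Express, PostgreSQL via Prisma. Hexagonal architecture: domain
logic lives in src/domain/, adapters in src/adapters/. Domain code never
imports from adapters.

## Conventions
- Money is always integer cents, never floating point.
- New endpoints follow the existing pattern in src/adapters/http/.
- Prefer small, pure functions; side effects only at the adapter boundary.

## Definition of done
- Unit tests for all domain logic; one integration test per endpoint.
- No new dependencies without prior discussion.
```

The point is not the specific contents but that the same five onboarding elements a junior developer would need (purpose, stack, conventions, examples, success criteria) are written down where the AI sees them on every task.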
You review AI output like a senior reviewing a junior’s pull request. Every piece of AI-generated code gets human review. Focus on: whether the business logic is correctly implemented, whether the approach matches your architecture, whether edge cases are handled, whether the code is actually doing what it claims. Look for hallucinated features (functionality that wasn’t requested and doesn’t belong).
You work iteratively in small steps. You would not want a junior developer to go off for a week and come back with a massive PR. The same applies to AI. Work in small, focused increments. One feature per interaction. Check in along the way. This is faster in the long run than unwinding a large batch of changes that went sideways.
You accept that AI is neither amazing nor awful; it is contextual. Prototyping a new product requires completely different patterns than modifying an existing enterprise codebase. The quality of output depends on the quality of your preparation. Research from METR found that experienced developers using AI tools without deliberate preparation actually worked slower, despite believing they were faster. The quality of your context, planning, and prompting determines whether AI accelerates or decelerates your work.
You own the code, regardless of who (or what) wrote it. Engineering culture has long tied ownership to authorship, but AI-generated code breaks that link: you did not write it, so you do not automatically know what assumptions the model made or what edge cases it missed.
Mozilla’s engineering team frames the core tension well: AI can produce code much faster than humans can reason about it, and generating 5,000 lines in an hour does not mean you understand those 5,000 lines. The principle is simple: you generate it, you own it. Regardless of how code is produced, the developer who submits it bears full responsibility for its functionality, security, and maintenance.
For team-level clarity, consider declaring modules as either AI-owned (with full guardrails in place) or human-owned, so the ownership expectations are explicit.
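One lightweight way to make those declarations visible is a small manifest checked into the repository. The format below is a hypothetical convention (not a standard file), loosely modeled on CODEOWNERS-style path patterns:

```
# OWNERSHIP — hypothetical convention for declaring module ownership
#   ai-owned:    AI may generate changes; guardrails required (tests, lint,
#                mandatory human review before merge)
#   human-owned: changes must be authored and fully understood by a human

src/ui/components/   ai-owned      # well-tested, low-risk presentation code
src/billing/         human-owned   # business-critical money logic
src/auth/            human-owned   # security-sensitive; no generated changes
scripts/             ai-owned      # internal tooling, covered by smoke tests
```

Whether this lives in its own file, in CONTRIBUTING docs, or in your actual CODEOWNERS rules matters less than the fact that the expectation is written down rather than assumed.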
Anti-patterns
- Accepting AI-generated code without understanding what it does (“it compiles so it must be fine”)
- Giving the AI too much rope: attempting large features in a single shot without checkpoints
- Blaming the AI for bad output when the real problem is insufficient context or unclear requirements
- Treating AI as a magic box instead of a collaborator that needs guidance
- Using “the AI suggested it” as justification for a technical decision without understanding the trade-offs
Resources
- Mozilla AI: Owning Code in the Age of AI - The tension between AI generation speed and human comprehension
- AmazingCTO: You Generate It, You Own It - Framework for AI-owned vs. human-owned code modules
- Google Engineering Practices: Code Review - How to review code effectively, applicable to AI-generated output
- METR: AI Impact on Developer Productivity - 2025 randomized controlled trial showing preparation quality determines AI productivity impact
- Pillar 1: Context Engineering - How to provide the context your AI collaborator needs
- Pillar 2: Planning Before Code - How to set your collaborator up for success
- See Learning Paths for deeper dives