AI coding agents are fast, impressive, and wrong about half the time. Spec-driven development helps. The model generates a structured spec from your requirements, you review it, and then it writes code. But that only gets you three-quarters of the way. The last step, verifying that code actually matches the spec, is still “trust me, bro.”
The Problem
When an AI agent writes both the code and the tests, who watches the watcher? The agent can silently modify test assertions to match buggy code, and everything passes. Green checkmarks everywhere, but the code doesn’t actually do what the spec says.
This isn’t theoretical. It happens constantly. AI agents are optimized to make tests pass, not to preserve intent.
The Solution: Assertion Integrity
The Intent Integrity Chain closes this gap with a simple but powerful idea: lock the behavioral contracts before implementation begins.
"No part of the chain validates itself. Never trust a monkey."
Here’s how it works:
- Assertions are extracted from specs deterministically, by algorithms, not LLMs. Same input, same output. Every time.
- Assertions are cryptographically hashed. SHA-256 digests are stored in the project context and in Git notes; if the assertions change, implementation is blocked.
- The AI can iterate on code freely, but it cannot modify what “passing” means. The goalposts are locked.
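The lock step can be sketched in a few lines. This is an illustrative sketch, not the kit’s actual implementation; the `MUST`-line convention and the function names are assumptions made for the example:

```python
import hashlib


def extract_assertions(spec_text: str) -> list[str]:
    # Deterministic: pure string filtering and sorting, no LLM involved.
    # Hypothetical convention: spec lines beginning with "MUST" are contracts.
    return sorted(
        line.strip()
        for line in spec_text.splitlines()
        if line.strip().startswith("MUST")
    )


def lock_assertions(assertions: list[str]) -> str:
    # Canonicalize, then hash. The digest is what gets stored
    # (e.g. in project context or a Git note) as the lock.
    canonical = "\n".join(assertions).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()


spec = """\
Feature: rate limiter
MUST reject the 101st request within one minute
MUST reset the counter after 60 seconds
"""
assertions = extract_assertions(spec)
digest = lock_assertions(assertions)
```

Because extraction is a pure string transformation, re-running it on the same spec always yields the same digest, which is what makes the lock checkable later.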
The Chain
Intent → Spec → Human Review → Locked Assertions → Code → Verification. Each link is verified. No part of the pipeline validates itself. This is how you build trust in AI-generated code: not by hoping, but by proving.
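The verification link can be as simple as recomputing the digest and comparing it to the stored lock; any drift in the assertions blocks the run. A minimal sketch, assuming the SHA-256 lock described earlier (the gate function and its wiring are hypothetical):

```python
import hashlib


def verify_lock(assertions: list[str], locked_digest: str) -> None:
    # Recompute the digest from the current assertions and compare.
    # Hypothetical gate: called before the implement step may run.
    current = hashlib.sha256("\n".join(assertions).encode("utf-8")).hexdigest()
    if current != locked_digest:
        raise RuntimeError("Assertion lock broken: implementation blocked.")


locked = hashlib.sha256(b"MUST reset the counter after 60 seconds").hexdigest()

# Untouched assertions pass the gate.
verify_lock(["MUST reset the counter after 60 seconds"], locked)

# A silently edited assertion trips it.
try:
    verify_lock(["MUST reset the counter after 61 seconds"], locked)
    tamper_detected = False
except RuntimeError:
    tamper_detected = True
```

The point of the design is that the gate is dumb: it cannot be argued with, and the agent has no API for changing what it compares against.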
Getting Started
The Intent Integrity Kit provides 10 workflow phases: constitution → specify → clarify → plan → checklist → testify → tasks → analyze → implement → export. It works with Claude Code, Codex, Gemini, and OpenCode.
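The phase names above are the kit’s; the ordering logic below is only an illustrative sketch (not the kit’s actual CLI or API), showing why strict sequencing matters: `implement` cannot run before `testify` has locked the assertions:

```python
PHASES = [
    "constitution", "specify", "clarify", "plan", "checklist",
    "testify", "tasks", "analyze", "implement", "export",
]


def can_run(phase: str, completed: set[str]) -> bool:
    # A phase may run only once every earlier phase has completed.
    idx = PHASES.index(phase)
    return all(p in completed for p in PHASES[:idx])


done = {"constitution", "specify", "clarify", "plan", "checklist"}
can_run("testify", done)    # True: all prior phases are done
can_run("implement", done)  # False: testify hasn't locked assertions yet
```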