
AI coding techniques vary widely in maturity and effectiveness

Insight: A systematic assessment of AI coding techniques by maturity and effectiveness:

- Prompt engineering — Mature / Effective
- RAG — Mature / Effective, but limited for large codebases
- Context engineering via CLI tools like grep and git — Emerging / Effective; proven by Claude Code's adoption
- Rules/AGENT.md — Emerging / Limited; models can ignore rules
- Tool calls — Mature / Effective; foundational
- MCP servers — Emerging / Limited; tool-explosion problem
- AST/Codemap — Emerging / Effective
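The context-engineering-via-CLI-tools approach can be sketched as a small tool dispatcher that shells out to `grep` and `git` on behalf of the model, instead of retrieving chunks from an embedding index. This is a minimal illustration, not Claude Code's actual implementation; the tool names and call format are hypothetical.

```python
import subprocess

# Hypothetical tool surface a coding agent might expose to the model.
# Each tool wraps a CLI command a human engineer would run by hand.
TOOLS = {
    "grep": lambda pattern, path: subprocess.run(
        ["grep", "-rn", pattern, path],  # recursive search with line numbers
        capture_output=True, text=True,
    ).stdout,
    "git_log": lambda path: subprocess.run(
        ["git", "log", "--oneline", "--", path],  # recent history of a file
        capture_output=True, text=True,
    ).stdout,
}

def run_tool_call(call: dict) -> str:
    """Dispatch a model-issued tool call (e.g. {"tool": "grep",
    "args": ["handle_login", "src/"]}) to the underlying command and
    return its stdout for insertion into the model's context."""
    return TOOLS[call["tool"]](*call["args"])
```

The retrieval step is just text search over the working tree, which mirrors how an engineer navigates unfamiliar code: search for a symbol, read the hits, follow the history.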

Detail: The Rules/AGENT.md assessment is notable: despite widespread adoption, models sometimes ignore rules entirely. Rules should be treated as guidelines, not guarantees. The MCP assessment highlights a specific technical problem: if an MCP server has 20 tools, all 20 definitions are stuffed into the context window. With multiple servers this balloons to 100+ tools, degrading model performance by reducing signal-to-noise ratio and increasing costs. Zhu frames context engineering as "an alternative form of RAG" that uses tools to search the codebase rather than embeddings — mirroring how human engineers actually navigate code. Claude Code popularized this approach.
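The tool-explosion cost scales linearly with server and tool count, since every definition is serialized into the prompt. A rough back-of-the-envelope sketch, assuming a representative (hypothetical) MCP tool schema and the common ~4-characters-per-token heuristic:

```python
import json

def schema_tokens(schema: dict) -> int:
    # Rough heuristic: ~4 characters per token for serialized JSON.
    return len(json.dumps(schema)) // 4

# Hypothetical MCP tool definition, sized like a typical real one.
example_tool = {
    "name": "search_issues",
    "description": "Search the issue tracker for issues matching a query, "
                   "optionally filtered by label and state.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search query"},
            "label": {"type": "string", "description": "Filter by label"},
            "state": {"type": "string", "enum": ["open", "closed", "all"]},
        },
        "required": ["query"],
    },
}

def context_cost(servers: int, tools_per_server: int, per_tool: int) -> int:
    """Tokens consumed by tool definitions alone, before the model
    sees any user request or code."""
    return servers * tools_per_server * per_tool

per_tool = schema_tokens(example_tool)
one_server = context_cost(1, 20, per_tool)    # 20 tools, one server
five_servers = context_cost(5, 20, per_tool)  # 100 tools across five servers
```

Five servers cost five times the context of one, paid on every request, which is the signal-to-noise and cost degradation the assessment flags.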

Sources

Related: existing entry "Context engineering supersedes prompt engineering" in batch-1/claude-code.md — COMPLEMENTS