Structured context engineering: YAML and Markdown outperform JSON for LLM context
Insight: A systematic study of 9,649 experiments across 11 models and 4 formats (YAML, Markdown, JSON, TOON) found that frontier models (Opus 4.5, GPT-5.2, Gemini 2.5 Pro) benefit significantly from filesystem-based context retrieval, while open-source models show weaker, less consistent gains. The study also observed a "grep tax": Token-Oriented Object Notation (TOON), designed to minimize token count, in practice caused models to spend more tokens iterating over the data because the format was unfamiliar to them.
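For illustration, here is the same small record rendered in three of the compared formats. The field names and values are hypothetical, and the TOON rendering is a sketch of its tabular syntax, not an example taken from the study:

```text
# JSON — verbose structural delimiters (braces, quotes, commas)
{"users": [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]}

# YAML — lighter syntax, heavily represented in model training data
users:
  - id: 1
    name: Alice
  - id: 2
    name: Bob

# TOON — fewest raw tokens, but unfamiliar to models (the "grep tax")
users[2]{id,name}:
  1,Alice
  2,Bob
```

The trade-off the study points at: TOON's compact header-plus-rows form wins on raw token count, but a model that has rarely seen it may burn more tokens re-reading and re-parsing it than the compact encoding saves.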
Sources
Related: existing entry "Context engineering for AI-assisted development" in external/claude-code.md — CORROBORATES