AI-generated performance reviews are detectable and corrosive — human judgment still matters

Insight: When both parties in a performance review use AI — the employee to draft their self-assessment, the manager to generate feedback — the process becomes a meaningless exchange of AI-generated text. The tell-tale signs are obvious: stock verbs like "pioneered" and "spearheaded," an absence of personal detail, and a regurgitation of the original input on steroids. Without human context, AI sets unrealistic goals (suggesting a lone engineer "write a frontend strategy document for the company") and misses nuances (assigning leadership of projects that already have leads). "I'd rather read the prompt. And I'd take a single ounce of humanity over a full time AI manager."

Detail: This is a concrete example of the "commodification of knowledge work" playing out in a real workplace process. The author's key insight: if both sides are using AI, the honest move would be to exchange bullet points directly. The seductive efficiency gain for a manager with 15+ reviews to deliver masks a genuine loss — the employee receives no real feedback, no contextual suggestions, and no evidence of being seen as an individual. This has direct implications for design teams, where craft review and mentorship are essential for growth.

Sources

Related: existing entry "AI is commodifying knowledge work" in external/ai-assisted-design.md — CORROBORATES