A design pattern in which human oversight is inserted into AI decision processes. COA critiques superficial human-in-the-loop implementations that provide the appearance of control without any substantive capacity to intervene. The mere presence of a human does not guarantee that authority is retained.
The system includes human review as required. But the human reviews 300 cases per day with no alternative analysis, no override authority, and performance metrics that penalise disagreement. The human is 'in the loop', but the loop is performative, not authoritative.
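The distinction between a performative and an authoritative loop can be made concrete in code. This is a minimal illustrative sketch, not a real system: all names (`Case`, `performative_review`, `authoritative_review`) are hypothetical. In the performative variant, the human's verdict is recorded but has no causal effect on the outcome; in the authoritative variant, disagreement actually changes the decision.

```python
from dataclasses import dataclass


@dataclass
class Case:
    id: int
    model_decision: str  # the automated system's recommendation


def performative_review(case: Case, human_verdict: str) -> str:
    # The human's verdict is logged for audit purposes but never used:
    # the 'loop' exists only on paper.
    audit_log = {"case": case.id, "human_verdict": human_verdict}
    return case.model_decision  # outcome ignores the reviewer entirely


def authoritative_review(case: Case, human_verdict: str) -> str:
    # The human can override the model: disagreement changes the outcome.
    return human_verdict


case = Case(id=1, model_decision="deny")
print(performative_review(case, "approve"))   # deny
print(authoritative_review(case, "approve"))  # approve
```

The test of genuine oversight is counterfactual: if the human had decided differently, would the outcome have differed? In the performative variant the answer is no, whatever the process documentation claims.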
Section 6.1: Institutionalised Automation Bias