Article 4: The Psychology of AI Explanations: How Much Detail is Too Much?
One of the biggest paradoxes of explainability is this: users often say they want more transparency, but in practice, too much detail erodes trust.
Cognitive load
Psychology gives us a tool here: cognitive load, or the mental effort required to process information.
- Too little explanation feels dismissive: “Because the model said so.”
- Too much explanation causes fatigue: long technical justifications confuse or alienate users.
- Just-right explanations, the “Goldilocks zone,” help users without overwhelming them.
Progressive disclosure
UX (User Experience) designers use the principle of progressive disclosure: give users a simple answer first, then let them drill deeper if they wish. Explanations could work the same way (see the sketch after this list):
- Layer 1: Simple rationale (“The system recommended this route because it’s faster.”)
- Layer 2: Supporting evidence (traffic data, accident reports).
- Layer 3: Technical detail (routing algorithm, probability models).
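Here is a minimal sketch of how such a layered explanation might be represented in code. The `ExplanationLayer` and `Explanation` types and the `drillDeeper` function are hypothetical names invented for this article, not a real library API, and the example strings are illustrative:

```typescript
// One layer of detail in a progressively disclosed explanation.
interface ExplanationLayer {
  label: string;   // e.g. "Simple rationale"
  content: string; // text shown to the user at this depth
}

// An explanation as an ordered stack of layers, simplest first.
interface Explanation {
  layers: ExplanationLayer[];
  depth: number; // how many layers the user has revealed so far
}

// Reveal the next layer when the user asks "Why?" again.
// Returns null once no deeper detail exists.
function drillDeeper(explanation: Explanation): ExplanationLayer | null {
  if (explanation.depth >= explanation.layers.length) return null;
  return explanation.layers[explanation.depth++];
}

// The routing example above, expressed as three layers.
const routeExplanation: Explanation = {
  depth: 0,
  layers: [
    { label: "Simple rationale", content: "This route is faster." },
    { label: "Supporting evidence", content: "Heavy traffic and an accident reported on the alternative." },
    { label: "Technical detail", content: "Selected by the routing algorithm from probabilistic travel-time models." },
  ],
};

console.log(drillDeeper(routeExplanation)?.content); // layer 1
console.log(drillDeeper(routeExplanation)?.content); // layer 2
```

The key design choice is that the user, not the system, controls the depth: each “Why?” pulls one more layer instead of dumping everything at once.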
Case study: Google Maps
When you ask Google Maps why it chose a route, it doesn’t show you the algorithm. It shows traffic colors, construction icons, and time savings. Users can dig deeper into alternate routes if they choose. This layered approach is designed with cognitive load in mind.
Case study: Healthcare AI
Imagine a patient portal that says: “The AI flagged this mole as high-risk.” A layered explanation could offer the following (sketched in code after this list):
- Simple answer: “Because of its size, shape, and color pattern.”
- Supporting evidence: “Compared to a dataset of 50,000 moles, it matches malignant features.”
- Technical detail (for the doctor): “The CNN model weighted asymmetry 0.65, border irregularity 0.54, color variation 0.72.”
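The same layered structure can also gate detail by audience. Continuing the sketch above, the roles and per-role depth caps below are illustrative assumptions, not clinical guidance:

```typescript
// Same layered shape as in the previous sketch (interfaces merge in TypeScript).
interface ExplanationLayer {
  label: string;
  content: string;
}

type Role = "patient" | "clinician";

// Illustrative assumption: patients see two layers by default,
// clinicians may also see the technical detail.
const maxDepthByRole: Record<Role, number> = {
  patient: 2,
  clinician: 3,
};

// Return only the layers this audience should see by default.
function visibleLayers(layers: ExplanationLayer[], role: Role): ExplanationLayer[] {
  return layers.slice(0, maxDepthByRole[role]);
}

// The mole example from the list above, expressed as three layers.
const moleLayers: ExplanationLayer[] = [
  { label: "Simple answer", content: "Flagged because of its size, shape, and color pattern." },
  { label: "Supporting evidence", content: "Matches malignant features in a reference dataset of 50,000 moles." },
  { label: "Technical detail", content: "Feature weights: asymmetry 0.65, border irregularity 0.54, color variation 0.72." },
];

for (const layer of visibleLayers(moleLayers, "patient")) {
  console.log(`${layer.label}: ${layer.content}`);
}
```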
Takeaway
Explanations should be thought of as dialogues, not statements. Users should be able to ask, “Why?” and then “Why again?”—without being forced into too much or too little detail.
👉 Reflection: How might we design AI explanations as conversations—layered, adaptive, and interactive—rather than static disclosures?
📌 Final Note: Together, these four articles argue that XAI is not just about making algorithms transparent. It’s about making AI human-centered—designed for the needs, psychology, and contexts of real people.