Trust cues
Trust cues are the small, deliberate signals that make a system feel understandable and controllable. They don’t “add trust” by branding; they reduce uncertainty by making risk, intent, and control legible.
Definition
- Trust cues are UI/UX elements (copy, affordances, states, provenance) that communicate four things: what is happening, why, what changes, and what the user can do next (sketched as a data shape after this list).
- They are strongest when they reflect real system properties: policies, permissions, audit trails, recoverability—not empty reassurance.
- In agentic systems, trust cues extend beyond screens into evidence surfaces and operator visibility.
- Think of them as legibility primitives: cues that compress complex system behavior into a few signals a human can trust.
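That four-part definition is concrete enough to model. A minimal sketch as a TypeScript data shape; every name here is illustrative, not taken from any specific framework:

```ts
// Illustrative sketch: the four things a trust cue communicates,
// expressed as a shape a UI component could render. All names
// are hypothetical, not from a specific library or framework.
interface TrustCue {
  whatIsHappening: string;   // current action or state, in plain language
  why: string;               // the policy or intent behind it
  whatChanges: string[];     // concrete effects if the user proceeds
  nextActions: UserAction[]; // what the user can do now
}

interface UserAction {
  label: string; // e.g. "Cancel transfer"
  kind: "proceed" | "cancel" | "appeal" | "escalate";
  reversible: boolean; // only true when recovery actually exists
}
```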
Why it matters
- Trust is a performance characteristic. If users can’t predict outcomes, they either abandon the flow or overcompensate with support requests.
- High‑stakes domains (identity, payments, regulated services) demand trust cues because the cost of error is real.
- With AI, trust cues prevent “black‑box anxiety”: users need to see boundaries, not just results.
Common failure modes
- Reassurance copy with no control (“Don’t worry, it’s safe”)—users feel manipulated.
- Hidden state: users don’t know whether something is pending, approved, or failed; a typed status sketch follows this list.
- No provenance: users can’t tell where a decision came from (data, policy, human).
- Over‑complex transparency: dumping logs in the UI instead of curating evidence.
- One-size-fits-all: the same cues for low‑risk and high‑risk actions.
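Hidden state and missing provenance have a structural fix: model both as explicit types, so the UI has nothing ambiguous to render. A hypothetical sketch, not a prescription:

```ts
// Hypothetical sketch: a discriminated union makes hidden state
// unrenderable -- every status carries its own evidence.
type Provenance =
  | { source: "data"; dataset: string }
  | { source: "policy"; policyId: string }
  | { source: "human"; reviewer: string };

type RequestStatus =
  | { state: "pending"; since: Date; owner: "system" | "human" }
  | { state: "approved"; at: Date; decidedBy: Provenance }
  | { state: "failed"; at: Date; reason: string; retryable: boolean };

// The renderer must handle every case; exhaustiveness checking
// turns a "hidden state" bug into a compile error.
function statusLabel(s: RequestStatus): string {
  switch (s.state) {
    case "pending":
      return `Pending since ${s.since.toISOString()} (owner: ${s.owner})`;
    case "approved":
      return `Approved (${s.decidedBy.source})`;
    case "failed":
      return s.retryable ? `Failed: ${s.reason} (retry available)` : `Failed: ${s.reason}`;
  }
}
```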
How I design it
- Start from the risk model: what can go wrong, for whom, and how bad it is. Then design cues proportional to risk.
- Make status explicit: timestamps, next step, owner (system vs human), and escalation route.
- Surface intent and scope: what data is used, what permissions apply, what will change if the user proceeds.
- Offer reversible actions when possible; when not, clearly mark irreversibility and provide confirmation patterns.
- Use “why this” evidence surfaces for AI decisions: inputs, constraints, confidence signals, and appeal routes.
- Calibrate cues to context: novice vs expert, low-risk vs irreversible. Trust cues should reduce cognitive load, not increase it; a calibration sketch follows this list.
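As a sketch of that calibration, assume each action declares a risk tier and the cue set is derived from it (all names hypothetical):

```ts
// Hypothetical sketch: derive cues from a declared risk tier so
// low-risk and irreversible actions never share the same treatment.
type RiskTier = "low" | "moderate" | "irreversible";

interface CueSet {
  requireConfirmation: boolean; // explicit confirm step before proceeding
  markIrreversible: boolean;    // honest "cannot be undone" label
  showEvidenceSurface: boolean; // inputs, constraints, confidence, appeal route
  offerUndo: boolean;           // promised only when recovery actually exists
}

function cuesFor(tier: RiskTier): CueSet {
  switch (tier) {
    case "low":
      return { requireConfirmation: false, markIrreversible: false, showEvidenceSurface: false, offerUndo: true };
    case "moderate":
      return { requireConfirmation: true, markIrreversible: false, showEvidenceSurface: true, offerUndo: true };
    case "irreversible":
      return { requireConfirmation: true, markIrreversible: true, showEvidenceSurface: true, offerUndo: false };
  }
}
```

The direction of derivation is the point: cues follow from the risk model, never from a shared template.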