Alessandro L. Piana Bianco
Strategic Innovation & Design — EU / MENA

VP Design — AI & Trust

If you’re hiring a design leader for AI-enabled, regulated, or trust-critical products, here is the evidence: governance, decision integrity, risk-aware delivery, and measurable adoption.

What you can expect

  • Trust is designed as a system — states, controls, recoverability, and auditability — not cosmetic reassurance.
  • AI governance becomes usable — human-in-the-loop, autonomy gradients, explanation cues, and incident-ready flows.
  • Decision velocity without quality loss — clear decision rights, critique rituals, and lightweight quality gates.
  • Risk-aware UX in regulated contexts — privacy/security constraints translated into shippable patterns and states.
  • Scale coherence across teams — pattern governance that prevents drift across squads and vendors.
Evidence

Evidence links and proof queries: see /proof/

What I do in the first 30 / 60 / 90 days

30 days
  • Map decision rights for AI-enabled UX, including risk acceptance and escalation paths.
  • Audit critical journeys for state coverage: failure, recovery, overrides, and evidence.
  • Define trust metrics: drop-offs, dispute/support debt, incident classes, and leading indicators.
60 days
  • Ship first improvements in the highest-risk flows (state-first), with measurable outcomes.
  • Implement quality gates for AI features (explainability, override, audit trail UX).
  • Establish pattern ownership and exception rules to reduce fragmentation.
90 days
  • Scale the governance loop across teams (cadence, decision artefacts, pattern evolution rules).
  • Stabilize incident-ready operating model: monitoring signals + response playbooks.
  • Institutionalize trust as a product KPI, not a post-launch concern.

Selected case studies

Case study

Intesa Sanpaolo — ML-based requalification & training dashboard (internal tool)

Focus: AI/ML decision support UX + privacy/access control + dashboard usability and trust

  • Designed interpretability and trust cues around ML recommendations so the tool never behaves as a black box.
  • Defined role-based views, access control expectations, and audit-friendly states for operational use.
  • Delivered a usable interaction model that supports responsible adoption under real constraints.
Case study

Eurobank — Digital banking redesign under Greek capital controls

Focus: regulated constraints + edge-case libraries + recoverability patterns

  • Designed constraint messaging and exception states where clarity directly impacts trust.
  • Built edge-case libraries and reusable transactional flow patterns adopted across segments.
  • Supported release under strict constraints without fragmenting patterns or terminology.
Case study

Cross-border wallet & transactional products — KSA + EU requirements

Focus: multi-market definition + cross-platform integration + privacy/security/accessibility constraints

  • Defined a common core + local layers model to scale across markets without leaking complexity.
  • Mapped integrations and dependencies early into delivery-ready epics and edge cases.
  • Established trust patterns for regulated transactions: states, confirmations, consent, and controls.

Contact

If you’re hiring for this role, send the brief, the constraints, and what “good” looks like in your org.