
THE TRUST ISSUE  

April 28, 2026
Johan Leidefors

Studies show AI technology is advancing while human trust stands still

Questions about AI are often treated as technical. Data must be structured, pipelines must be reliable, models must be accurate, and governance must be built in. That picture is correct, but incomplete. As a UX Designer with a particular interest in HCI (Human-Computer Interaction), my job is to ask questions from a human perspective: What does the user think about this? Inspired by an article published back in 2022 by Tita Alissa Bach et al. (which, admittedly, is an eternity in “AI time”), I share a few reflections on the topic here.

I see a fairly clear direction for how AI-capable infrastructure is being built. There are established expectations for how data should be structured, how workflows should be orchestrated, and why governance and process thinking should be treated as a starting point rather than something bolted on afterwards. Many organizations are still on this journey, but the destination is defined: there is a growing consensus around what “good data structure” looks like, even if few have fully arrived. What remains unclear is something else entirely: how people adapt to this technical structure once it is in place. Technical readiness answers one question: will the system work? Human readiness answers a different one: will people rely on it? The answers do not automatically align.

In a recently published study by Anthropic involving over 80,000 users, AI is consistently described as positive and useful. It saves time, accelerates tasks, and expands capabilities. But alongside these statements, concerns about reliability and trust appear in most responses. Users adopt AI, but selectively. They use it where the cost of error is low and hesitate where the consequences are greater, according to the study. 

This pattern is repeated in other independent studies. At the organizational level, McKinsey & Company reports that most companies are now experimenting with or implementing AI, but largely at the pilot stage. The technology exists, but organizations are only scratching the surface. My interpretation is that the technology itself is perhaps not the problem, but rather people’s trust in the machine’s ability to deliver correct output. 

At the societal level, extensive research from KPMG with 48,000 respondents shows that adoption and skepticism are increasing in parallel. People see the benefits of AI while continuing to question its reliability and implications. Optimism and concern seem to coexist rather than resolve over time. 

Across all these perspectives, the same structure emerges. AI adoption is broad, but shallow. Capabilities expand, but the boundaries of acceptable use cases stay fixed. Here lies the mismatch: AI systems are designed to optimize for correctness, while users live in a world defined by uncertainty and risk. For humans, the relevant variable is not whether the system is correct most of the time, but whether it is safe to rely on in my specific context.

This points to a recognizable usage pattern. Tasks that are repetitive, low-risk, and easy to verify are quickly delegated to AI; drafts, summaries, and initial analysis are common examples. Tasks involving judgment, accountability, or reputational risk remain human-controlled, regardless of how capable the system becomes. For this reason, I do not believe this boundary will shift automatically with improved performance. Even if output quality increases and user effort decreases, that does not necessarily build the conviction required to trust the system in critical situations. As a straightforward example: if your building is on fire, would you trust an AI to tell you what to do? Or in your next project at work, how good does the AI have to be before it alone is allowed to place that million-euro order?
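To make that intuition concrete, here is a stylized back-of-the-envelope sketch. The accuracy levels and the €1,000,000 order value are my own hypothetical numbers, not figures from any of the studies cited, but the arithmetic shows why rising accuracy alone does not close the trust gap when a single error is expensive:

# Illustrative only: expected loss from acting on an unchecked AI decision.
# The accuracy levels and the €1,000,000 stake are hypothetical numbers.
order_value = 1_000_000  # cost of one wrong decision, in euros

for accuracy in (0.95, 0.99, 0.999):
    expected_loss = (1 - accuracy) * order_value
    print(f"accuracy {accuracy:.1%} -> expected loss per decision: €{expected_loss:,.0f}")

# Output: €50,000 at 95%, €10,000 at 99%, and still €1,000 at 99.9%.
# As long as verifying the output costs less than that, checking stays
# rational, and the human-controlled boundary does not move by itself.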

Trying to solve this through governance and transparency is, of course, necessary. Traceability, verifiability, and explainability are essential components of any robust system. But they do not translate directly into user trust. Part of the issue may be that AI is expected to replace existing solutions that people already trust. User trust is shaped through experience: how predictable the system is, how often it fails, and how costly it is to verify its output. This can create a gap between what the system is theoretically capable of and how it is actually used.

My take is that from a technical perspective, the direction is becoming increasingly well defined. From an HCI perspective, it remains highly fluid. The result is that even well-developed systems risk getting stuck at a suboptimal level: used where they feel safe, avoided where the stakes are high. And with reports of AI contributing to errors at both Meta and Amazon, the willingness to hand over operations to this new technology is held back.

The implication is not that the technical work is done. Many organizations still need to reach a level of data and system maturity where AI can function reliably. But even when that level is reached, it does not resolve the HCI issue. The trust issue. Technical readiness enables potential. Human trust in the technology determines its realization. Until both are addressed together, AI risks continuing to scale in potential rather than in actual impact. 

Sources 

Tita Alissa Bach et al. (2022) – A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective

Anthropic – 81k interviews / usage study

McKinsey & Company – State of AI

KPMG – Trust, attitudes and use of AI

A rogue AI led to a serious security incident at Meta

Amazon’s Blundering AI Caused Multiple AWS Outages

About the author

Experience Designer | Sweden
At Sogeti, most of my projects involve product design, UX, and customer behavior. I always aim to be a good listener and an innovative problem solver. Drawing on my experience in graphic design and UX, I try to stay open to an ever-changing and increasingly digital world.
