Trusting technology
One sentence bothered me because it comes across as so harmless and sounds so self-evident: humans must decide how far they want to trust AI.
The word trust caught hold of me and wouldn’t let go.
Trust is something that arises between people. It has to do with intention, and with the assumption that someone means well toward you, or at least isn’t deliberately working against you. Trust is therefore not a technical term but a very human one. It presupposes that on the other side there is someone who decides how to deal with you.
An algorithm doesn’t decide; it calculates. That is a fundamental difference, and the word trust renders it invisible.
When a calculator gives me a wrong result, I don’t say: I don’t trust it. I say: it’s broken. Or: I typed something in wrong. The tool has no intention; it works or it doesn’t. Trust is the wrong category here.
But with AI we use the word more and more often. We trust AI; AI is trustworthy. The whole regulatory debate revolves around this. And with the word we import an entire framework of expectations.
When I trust someone and they disappoint me, it feels like betrayal. When a tool fails, it feels like a malfunction. The emotional charge is completely different. Yet when we give an AI chatbot our trust and it delivers a wrong result, we don’t react as we would to a broken tool. We react as if it were a betrayal. We feel deceived, even though there is no one who wanted to deceive us.
The usual argument treats trust as something you can decide on and dose: a little trust, a lot of trust. More after good experiences, less after bad ones. What sounds rational at first glance overlooks what happens when you trust something that has no consciousness.
I see this more and more often in companies. Teams rely on AI results because they look good, and they stop questioning those results because the machine has been right so far. That isn’t trust. That is habituation. It feels the same, but the mechanism behind it is different. Habituation makes you careless; trust makes you vulnerable; both can become dangerous in their own way.
What I find missing is a language that makes the difference clear. Instead of “trust in AI” we could ask: how reliable is this tool in this context? That sounds less elegant and less human, but it reduces the risk of mistaking an algorithm for a human.
The personification of technology isn’t new: we give cars names and curse at computers. But with AI, personification has consequences, because the outputs are linguistic and therefore sound as if they come from a human. The form is human even when the content is machine-made.
I often hear that we build trust over time as the results keep getting better. First small tasks, then larger ones, and when the AI proves itself, we give it more responsibility. That is the same pattern we use to onboard a new employee. The language no longer distinguishes between human and tool.
I think the real point isn’t whether we should trust AI, but that we will soon stop distinguishing between a tool and a counterpart, and that a word like trust lets this distinction quietly disappear. The question, then, isn’t how far we want to trust AI, but whether another term fits the technology better: distrust.