Trusting a machine

A sentence caught my eye because it sounds so self-evident: “Humans must decide how far they want to trust AI.”

I got stuck on the word. Trust.

Trust is something that develops between people. It has to do with intent. With the assumption that someone is well-meaning toward you, or at least not deliberately working against you. Trust isn’t a technical term. It’s a human one. It presupposes that on the other side there’s someone who chooses how to treat you.

A machine doesn’t choose. It computes. That’s a fundamental difference, and the word trust makes it invisible.

If I suspect my calculator gave me a wrong result, I don’t say: I don’t trust it. I say: It’s broken. Or: I typed it in wrong. The tool has no intent. It works or it doesn’t. Trust is the wrong category.

But with AI, we use the word. Constantly. Trust in AI. Trustworthy AI. The entire regulatory debate revolves around it. And with the word, we import a whole framework of expectations that doesn’t fit.

When I trust someone and they disappoint me, it feels like betrayal. When a tool fails, it feels like a malfunction. The emotional charge is completely different. But when we place trust in AI and it delivers a wrong result, we don’t react the way we would to a broken tool. We react the way we would to a betrayal. We feel deceived. Even though there was nobody there who wanted to deceive us.

The common argument treats trust as a scale. Low trust, high trust. More trust after good experiences, less after bad ones. That sounds rational. But it misses what happens when you trust something that has no consciousness.

I’ve seen this in companies. Teams that rely on AI results because they look good. That stop questioning because the machine has been right so far. That’s not trust. That’s habituation. It feels the same, but the mechanism is different. Habituation makes you sluggish. Trust makes you vulnerable. Both are dangerous, but in different ways.

What I’m missing is a language that draws the distinction. Instead of “trust in AI,” we could say: How reliable is this tool in this context? That sounds less elegant. It’s also less human. But that’s exactly the point. It’s not a human.

The personification of technology isn’t new. We give cars names. We curse at computers. But with AI, personification has consequences. Because the outputs are linguistic. Because they sound like they come from someone. Because the form is human, even when the content is mechanical.

The standard recommendation is to build trust gradually. Small tasks first, then bigger ones. When the AI proves itself, more responsibility. That’s the same pattern you use to onboard a new employee. The language no longer distinguishes between a person and a tool.

I think that’s the real point. Not whether we should trust AI. But that we’ve stopped distinguishing between a tool and a counterpart. And that a word like trust quietly erases that difference.

The question isn’t how far we want to trust AI. The question is why we’re thinking in those categories at all.