Skepticism Is Not a Bias

In nearly every AI implementation, a term shows up sooner or later: status quo bias. The tendency to stick with the familiar, even when the new thing would be better. Consultants use it to explain why employees resist AI. It’s a cognitive distortion, they say. A psychological pattern. Something to be overcome.

In this framing, employees who are skeptical don’t have instincts. They have a bias.

That’s a remarkable reframing. You take people who have been working in a process for years and know it inside out, and you declare their concerns a thinking error. Not a signal. Not a perspective. An error in their perception that needs to be corrected.

The same consultants who diagnose this bias in employees regularly decide against AI tools in their own work. For texts, analyses, strategy papers. Because the output doesn’t deliver the necessary quality. They have experienced firsthand that AI isn’t good enough in their field. They were skeptical of the AI output and chose the human path.

Is that status quo bias? Or is that an informed judgment?

The question answers itself. For the consultants it’s professional judgment. For the employees it’s a bias. The difference is not in the quality of the assessment. It’s in the position. Those at the top judge. Those at the bottom distort.

I’ve seen this many times. In companies that digitize, automate, transform. The people on the shop floor, in customer service, in administration say: this isn’t going to work that way. And then a consultant is flown in to explain why that’s resistance and how to manage it.

Two years later it turns out the administrator was right. The process that was supposed to be automated had dependencies that weren’t visible in the process diagram. The customer system worked differently than the documentation described. The exceptions that were supposedly rare happened daily. The administrator knew that. The consultant didn’t. But the administrator had a bias and the consultant had a contract.

Skepticism is not a bias. Skepticism is the natural response to a promise that hasn’t proven itself yet. It’s not a pathology. It’s reason.

There’s a simple test. If consultants themselves are skeptical of AI output and defend that as a professional decision, then they have to grant the same right to the employees they advise. If a warehouse worker says the AI-driven inventory planning doesn’t match reality, that’s not status quo bias. That’s field knowledge.

The pattern is always the same: resistance to AI is explained, never analyzed. Nobody asks what the skeptics know. What they see that others don’t. Whether their objections might be data the model doesn’t contain.

In a smart organization, skepticism would be an input. Someone who says “This won’t work” is delivering a hypothesis. You can test it. You can disprove it. Or you can find out it’s correct. But for that you’d have to treat it as a hypothesis first, not as a deficit.

Instead, a concept borrowed from behavioral economics, where it has a specific meaning, gets slapped as a label on every employee who isn’t immediately enthusiastic. That’s not analysis. That’s rhetoric. It serves to end the discussion before it begins.

Consultants can afford the luxury of being skeptical. They sit at their desks and decide: AI isn’t good enough here. For the employees they advise, that luxury isn’t available. They’re supposed to accept. And if they don’t, they have a bias.