How much automation is acceptable?

There’s a question that’s rarely asked seriously in the AI debate: How much automation is acceptable?

It’s the right question. Maybe the only one that really counts. Not whether AI works. Not how to implement it. But: How much of it do we want?

The answer could take the whole discussion in a different direction. An honest direction. One that acknowledges that more technology isn’t automatically better. That there might be a point where automation should stop. That the answer to the question could be: less than we think.

But the answer that always comes is: a hybrid model. Humans and machines together. AI for the routine tasks. People for the important decisions. Finding the balance. The sweet spot. The right mix.

That sounds reasonable. It sounds balanced. It sounds like a compromise everyone can agree on.

It doesn’t answer the question.

The direction of automation

A hybrid model doesn’t say how much automation is acceptable. It says: somewhere in between. It shifts the decision from the fundamental level to the implementation level. Not whether, but how. Not how much, but what mix. The question about the limit gets replaced by the question about configuration.

I’ve worked on projects where hybrid was always the beginning. First the machine replaces only the boring tasks. Then the repetitive ones. Then the ones nobody wants to do anyway. And at some point, hybrid is just another word for automated, with a human who occasionally presses a button.

The direction is always the same: more automation. Never less. No company implements a hybrid model and then decides on less machine and more human. The economic logic pulls in one direction. Hybrid is the transition phase, not the destination.

Everyone in this industry knows that. The idea that the process stops somewhere, at a balanced point where humans and machines collaborate as equals, contradicts everything the history of automation shows.

What bothers me about this answer isn’t the answer itself. It’s what doesn’t come up. The possibility that the answer is less. That acceptable doesn’t mean more automation with human oversight, but less automation, because certain things shouldn’t be automated.

That possibility doesn’t exist in the debate. Not as an argument, not as a counterposition, not even as a question. The discussion only knows one direction: more. The question of “how much” gets answered with “in what way,” and the direction is set.

Take customer service. AI handles the simple inquiries. For complex cases, a human takes over. Hybrid. Sounds good. But what happens when the AI gets better? What happens when it can handle the complex cases too? Will the human be brought back in, on principle? Or will they be rationalized away because the machine is cheaper?

The question answers itself. And nobody asks it.

How much automation is acceptable? The honest answer would be: We don’t know. We have no criteria for it. We have no framework that says: Here is the line; beyond it, a human should stand, not because the machine can’t do it, but because it wouldn’t be right.

Nobody gives that answer. What they say instead is: hybrid. And hybrid is the word you use when you don’t want to answer the question.