How much automation is acceptable?
There’s a question that comes up often in the AI debate but mostly goes unanswered. How much automation is acceptable, and when does it become untrustworthy or even dangerous?
It’s the right question. Not whether AI works or how to implement it, but how much of it we want.
An honest answer could take the whole discussion in a different direction, one in which we acknowledge that more technology doesn’t automatically mean better, and that the answer might be: less than we think.
But the answer that always comes back is a hybrid model: humans and machines work together and share the tasks. AI takes over the routine work, humans make the important decisions. So it’s about finding the right balance.
That sounds reasonable. It sounds like a compromise everyone can agree with.
But it doesn’t answer the question.
A hybrid model doesn’t say how much automation is acceptable. It says: somewhere in between. It shifts the decision from a matter of principle to a matter of implementation. Not whether, but how. Not how much, but what mix. The question about the limit gets replaced by the question about the configuration.
I’ve worked on projects where hybrid was only ever the starting point. First the machine replaces just the boring tasks. Then the repetitive ones. Then the ones nobody wanted to do anyway. And at some point, hybrid is just another word for automated, with a human who occasionally presses a button.
The direction is always the same: more automation, never less. No company implements a hybrid model and then opts for less machine and more human. The economic logic pulls in one direction. Hybrid is always a transitional phase, never the actual goal.
Everyone who works with AI knows this, no matter the industry. The idea that the process stops somewhere and settles into a sweet spot where humans and machines collaborate as equals contradicts everything the history of automation shows.
What bothers me about the hybrid answer is that it ignores this contradiction. The likelihood that history repeats itself here is higher than the likelihood that the principles of automation suddenly stop applying.
Reality knows only one direction: more automation. Machines cost less than humans. The question about how much misses that reality.
Take customer service. AI handles the simple inquiries. For complex cases, a human takes over. Here, hybrid sounds good. But what happens when AI can handle the complex cases too? Will a human still be assigned, on principle, or will they be rationalised away because the machine is cheaper and works at least as well?
The question answers itself.
So, how much automation is acceptable? The honest answer would be: we don’t know. We don’t have criteria for it. We don’t have a framework that says: here is the line, and beyond it a human should stand, not because the machine can’t, but because it wouldn’t be right.
As long as an honest answer is missing, we’ll keep hearing: hybrid. And hybrid is the word you use when you don’t want to answer the question seriously. Or when you’re ignoring reality.