Perfection as a Trust Killer
I scroll through the Instagram feed of an AI influencer cited as a success story. The face is symmetrical. The skin is flawless. The lighting is always right. Every image looks like the cover of a magazine that doesn’t exist. And after the third image I scroll on because I’m bored.
The proponents of this technology describe at length why perfect AI influencers work. Control over every pixel. No bad days, no unflattering angles, no embarrassments. The perfect product for the perfect campaign. Pages are written about this advantage.
Then comes a paragraph, almost in passing, stating that this perfection can be off-putting. People don’t trust the flawless. They find it uncanny.
I always read paragraphs like that twice. Because the people who write them don’t notice they just contradicted their own argument. Or they notice and hope the reader skips past the contradiction.
Perfection doesn’t build trust. Anyone who has ever sat across from an overly polished salesperson knows that. The guy whose every sentence lands, whose every smile is timed, whose every answer sounds rehearsed. You might buy anyway. But you don’t trust him. You know something’s missing. You just don’t know what.
What’s missing is the mistake. The moment someone stumbles, corrects themselves, laughs because something embarrassing happened. The moment that shows: There’s a person here. Not a surface.
Trust doesn’t come from competence alone. Trust comes when competence and vulnerability meet. When you see that someone is good at what they do and can still make mistakes. That they’re not perfect. That they’re not trying to be.
With an AI there is no vulnerability. There are only parameters. And if someone decides the AI should make mistakes so it seems more human, then the mistakes are planned. A planned mistake is not a mistake. It’s another feature.
The proposed solution: Building in deliberate imperfections so the AI influencers seem less uncanny. A pimple here. A blurry photo there. A small irregularity that says: Look, I’m like you.
But you’re not like me. You’re a calculation pretending to be like me. And the simulated flaw is worse than the perfection. Because it removes the last honest quality the AI had: being obviously artificial.
A perfect AI influencer is at least honest in its artificiality. You see it and you know: This isn’t real. That’s clear. You can categorize it. An AI influencer with built-in flaws fakes realness. It lies about the one thing that still made it recognizable as AI.
The dilemma rarely gets named. Perfection works as a product but not as a relationship. It sells but it doesn’t connect. And influencers are supposed to connect. That’s their business model. Reach through attachment.
So the AI has to become imperfect. But simulated imperfection isn’t imperfection. It’s another layer of deception. The problem doesn’t get solved. It gets shifted.
I think about people I trust. In every case, the trust traced back to something the person couldn’t control. An honest moment. A reaction that wasn’t planned. A mistake they admitted even though they didn’t have to. Trust is what happens when control stops.
With an AI, control never stops. Every output is controlled. Every mistake is either a bug or a feature. There’s no space in between. And trust is born in that space in between.
Here is a technology that can control everything. And total control destroys trust. The answer isn’t more control. But less control isn’t a business model. So the contradiction remains, unresolved, and most people will skip past it.