Manipulation Risk as a Subpoint

In most texts about AI in marketing, there’s a section on risks somewhere. It sits between the sections on monetization and scaling. It covers manipulation risk, psychological effects, ethical concerns. It’s maybe thirty lines. Relative to the rest of the text, that’s about as much space as you’d give a known bug in a release notes document.

That’s not an accident. It’s how the AI industry thinks about morality.

Moral questions are treated like technical problems. There’s an issue. It gets logged. It gets a priority. Someone writes a fix. In the next version it’s better. That works for software. For moral questions, it doesn’t.

The manipulation risk being described isn’t something you can patch. It’s not a bug in the system. It is the system. When you build an AI that influences people, manipulation isn’t a flaw that occasionally occurs. It’s the core function that occasionally goes too far.

But the industry frames it differently. It lists risks the way you list specifications. Manipulation risk. Check. Low self-esteem. Check. Unrealistic expectations. Check. Then come suggestions: transparency, regulation, education. And then it moves on to the next section.

In my own work I’ve seen often enough how risk assessments function. You write down the risks. You rate them. You assign mitigations. And then you check them off. The risk is documented, so it’s managed. Not solved. Managed. That’s enough.

For software that crashes, maybe that is enough. For technology that shapes the self-image of young people, it isn’t. But the people who build these systems use the same framework. They come from the tech world, and in the tech world there’s a solution for every problem. If the solution doesn’t exist yet, it will soon. In the meantime you document the problem and move on.

Ethics as a checklist. Transparency as a feature. Regulation as a roadmap item.

What’s missing is the possibility that there is no solution. That some problems aren’t solved by better technology but only by the decision not to do something. That possibility doesn’t exist in this thinking. Every problem has a next step. The idea that the next step might be standing still doesn’t come up.

I think of a conversation I had years ago with an engineer. It was about a product that was technically possible but questionable. He said: If we don’t build it, someone else will. So we might as well build it better than the others. The argument sounds pragmatic. In truth it’s an abdication. It says: I’m not responsible for what I build because someone else would build it too.

The industry argues the same way. The risks are there, but the technology is coming anyway. So better to shape it than to ignore it. That sounds reasonable. It even sounds responsible. But it assumes that shaping is enough. That a technology whose core function is influence can be shaped so that the influence produces only good outcomes.

I don’t know of any technology where that has ever worked.

The problem isn’t that the risks are hidden. They’re in the texts. The problem is the architecture of the texts. The risks are a subpoint. The opportunities are the main topic. The weighting says more than the words. It says: The business is the topic, the morality is the footnote.

When you cover manipulation risk in thirty lines and monetization in thirty pages, you’ve made a decision. Not with words. With proportions. And proportions don’t lie.