Ethics Sections as Liability Disclaimers

It’s always the same pattern. In AI strategies, in whitepapers, in consultant presentations. An ethical problem is named. Bias in algorithms, discrimination in the data, lack of transparency. Then a paragraph that sounds like responsibility. Regular audits. Ethical guidelines. Responsible use. And then, without transition, the next feature.

These ethics sections read like terms and conditions. You know they’re there. You know you should have read them. And you know nobody expects you to.

I’ve written terms and conditions. In a previous professional life, as a product manager. I know how they’re made. A lawyer drafts what protects the company. Not what protects the customer. The language sounds neutral. The function is one-sided. The goal is not understanding. The goal is documentation. In case of a dispute, there should be proof that the user was informed.

The ethics sections in AI strategies work the same way. They inform. They don’t address. The difference matters.

Addressing would mean: Here is a specific problem. Here is what we know about it. Here is what has been tried. Here is why it didn’t work. And here are the questions we still can’t answer. That would be honest. That would also be uncomfortable. Because it would show that the problems are real and that there are no easy solutions.

Instead, you get: Companies should conduct regular audits to identify and fix bias in their AI systems. That’s a sentence. Not one element of it is concrete. Which audits? How often? Against what criteria? Who conducts them? Who pays for them? What happens when an audit finds bias? Is the system shut down? Adjusted? Kept running?

None of that gets answered. Because the answers would be inconvenient. Because they would interrupt the sales pitch. Because anyone who honestly writes about the limits of AI ethics gets less attention than anyone who pretends ethics is a configuration step.

I’ve made a habit of marking the ethics sections in documents like these and then checking: Does this section lead to a concrete action? Does it change anything about what was recommended before it? Does it limit a recommendation? The answer is almost always no. The ethics sections are ornamental. They change nothing about the substance. They hang there like a fire extinguisher on the wall that has never been inspected.

What bothers me is not the superficiality. It’s the function. The ethics sections are not there because anyone cares about ethics. They’re there because an AI document without ethics sections would be vulnerable to criticism. It’s protection. Like a disclaimer at the end of an ad. May contain risks and side effects.

Real ethical engagement is hard work. It demands tolerating contradictions. Saying: This tool can discriminate and we don’t yet know exactly how to prevent that. That would build trust. Not because the solution is there, but because the honesty is.

What happens instead is a simulation of responsibility. The words are there. The substance is missing. And the reader who doesn’t look closely walks away feeling that ethics was considered. It wasn’t. It was mentioned. That’s not the same thing.

If you write an ethics section that changes nothing about the rest, you haven’t written an ethics section. You’ve taken out an insurance policy.