Ontology and Agentic AI Follow the Same Principles, but Deliver Different Results
Ontology is the study of what exists and how it connects. In computer science it becomes more specific, especially when artificial intelligence enters the picture: you get a formal representation of concepts and the relationships between them. The questions to answer are: what information and data are available, how are they connected, and what conclusions can we draw from them?
After many years of developing knowledge systems, I have watched the precision of queries, and with it their usefulness, rise considerably with the help of AI. And with that come entirely new possibilities.
What I do is generally applied ontology: I develop a structure and use it as a system to get from data to a result. Not organizing knowledge for the sake of organizing and creating yet another wiki, but organizing knowledge so that decisions can be drawn from it as directly as possible. What reads at first like a subtle shift changes everything: a description of what exists becomes a real tool. Modern chips and computing infrastructures amplify all of it.
Five Principles
Connecting an AI agent system with ontological structures answered my question about the principles right away: in my view, the principles behind the conventional, ontology-driven analysis of scientific studies and the principles behind agentic AI are the same.
Decomposition. Breaking a complex problem into parts. For many years I advised US companies that wanted to expand into Europe. That is not one decision but twenty, and they are all interconnected: regulations, distribution channels, pricing structures, logistics chains, target audiences, languages, competition, product variants. Each topic is solvable on its own; together they form a system. An AI agent does the same thing: it takes a task, breaks it into steps, and works through them in parallel.
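To make the idea concrete, here is a minimal Python sketch of that decomposition; the subtask list and the analyze() stub are my own illustrations, not part of any real agent framework.

```python
# Minimal sketch of decomposition: one market-entry task split into
# independent subtasks that are worked through in parallel. The
# subtask names and the analyze() stub are hypothetical.
from concurrent.futures import ThreadPoolExecutor

SUBTASKS = [
    "regulations", "distribution_channels", "pricing_structure",
    "logistics_chains", "target_audiences", "languages",
    "competition", "product_variants",
]

def analyze(subtask: str) -> str:
    # Placeholder for whatever an agent or analyst does per subtask.
    return f"{subtask}: analyzed"

with ThreadPoolExecutor() as pool:
    # Each part is solvable on its own; together they form a system.
    for result in pool.map(analyze, SUBTASKS):
        print(result)
```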
Relationships. Understanding how the parts connect. Regulation affects the pricing structure, which determines which distribution channels are realistic, which in turn directly shapes the product-audience matrices. And depending on the target audiences and product assignments, tactical marketing execution looks completely different. When I map these connections ontologically to make them understandable, the relationships have to be explicitly defined. An agent recognizes them too, but it does not define them: it finds them and works through them.
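The difference shows up even in a toy model. A hand-built ontology states every relationship as an explicit, typed edge; the triples and relation names below are invented to mirror the chain just described.

```python
# Explicit relationships as typed edges (subject, relation, object).
# The relation names are illustrative, not a standard vocabulary.
RELATIONS = [
    ("regulation",              "affects",    "pricing_structure"),
    ("pricing_structure",       "determines", "distribution_channels"),
    ("distribution_channels",   "shapes",     "product_audience_matrix"),
    ("product_audience_matrix", "drives",     "marketing_execution"),
]

def downstream(concept: str) -> list[str]:
    """Everything directly influenced by a concept."""
    return [obj for subj, _, obj in RELATIONS if subj == concept]

print(downstream("pricing_structure"))  # ['distribution_channels']
```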
Dependencies. Knowing what has to come before what, and what follows from it. You cannot build a pricing strategy before you know the markets inside out, and no marketing plan is actionable before you have understood the target audience completely. The sequence is not optional; it is essential. In an ontology the sequence is hard-wired and usually quite rigid. AI agent systems often decide the sequence themselves while working and therefore appear somewhat more flexible.
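Deriving a hard-wired sequence from explicit dependencies is, at bottom, a plain topological sort. A minimal sketch, with a dependency map reduced from the examples above:

```python
# Deriving the sequence from explicit dependencies via topological
# sort. The dependency map is a hypothetical reduction of the text.
from graphlib import TopologicalSorter

# task -> set of tasks that must be finished first
deps = {
    "pricing_strategy": {"market_analysis"},
    "marketing_plan":   {"audience_research"},
    "launch":           {"pricing_strategy", "marketing_plan"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)
# e.g. ['market_analysis', 'audience_research', 'pricing_strategy',
#       'marketing_plan', 'launch']
```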
Context. Results depend on conditions. What works in France does not automatically work in Germany. Marketers know this and no longer tear their hair out over it, because by now everyone recognizes that these two independent markets need two completely different marketing approaches. But careful: what applies to a consumer product does not apply to medical technology. I once developed a training system for dental filling materials for the 3M ESPE group and learned very early that ontology maps context best as a completely separate layer. An interesting quality of AI agents is their ability to learn (usually correct) contexts from the data you give them.
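Modeling context as its own layer can be sketched in a few lines: the same question gets a different answer per market and product category. The markets, categories, and channel values below are invented for illustration.

```python
# Context as a separate layer: a base rule plus contextual overrides
# keyed by (market, product category). All values are hypothetical.
BASE_RULE = {"channel": "retail"}

CONTEXT_LAYER = {
    ("France",  "consumer"): {"channel": "retail"},
    ("Germany", "consumer"): {"channel": "direct"},
    ("Germany", "medical"):  {"channel": "certified_distributor"},
}

def resolve(market: str, category: str) -> dict:
    # The context layer overrides the base rule; it never rewrites it.
    return {**BASE_RULE, **CONTEXT_LAYER.get((market, category), {})}

print(resolve("Germany", "medical"))  # {'channel': 'certified_distributor'}
```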
Traceability. Knowing where a decision comes from. Why do I recommend direct sales over retail? Take the American lifestyle jewelry maker QALO: margins in that segment were already rather low for brick-and-mortar retail, and the target audience had been trained toward online purchase through highly scaled individualization (engraving of plastic rings). In the ontological value chain at the time, this became visible quickly. When I later repeated the experiment with AI agents, it did not. The path is transparent despite the dynamics behind it, and you can follow every step. But with complex decision chains it devolved into too much trial and error and simply took too long to reach decisions. For simple representations you get to the result quickly; for anything beyond that, not yet. Given the speed at which agentic AI is developing, that may not stay true for long.
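Traceability, reduced to its core, is a data problem: every recommendation should carry the facts it rests on. A minimal sketch; the structure is hypothetical, and the evidence strings merely paraphrase the QALO example above.

```python
# Traceability as provenance: a recommendation bundled with the
# facts it rests on, so every step can be followed afterwards.
from dataclasses import dataclass, field

@dataclass
class Decision:
    recommendation: str
    evidence: list[str] = field(default_factory=list)

    def explain(self) -> str:
        reasons = "\n".join(f"  because: {e}" for e in self.evidence)
        return f"{self.recommendation}\n{reasons}"

d = Decision(
    recommendation="prefer direct sales over retail",
    evidence=[
        "brick-and-mortar margins in the segment are low",
        "audience trained toward online purchase via individualization",
    ],
)
print(d.explain())
```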
The Unresolved Tension
The principles are the same. But the results are not.
A well-thought-out ontological model works. I know why the result comes out the way it does and not otherwise; I can explain it to someone, and they understand it. When something changes, I know where to correct it. It follows my plan, and that is why it is traceable.
Agents mostly do what they want. You can force them into a rigid framework, but then it is no longer agentic AI. That sounds odd, but it is my experience after countless trial-and-error attempts. I work with AI agents every day, and the results get better, sometimes significantly better than what I would have developed myself. But the agent does not follow a predetermined plan. It finds its own way, and I do not always understand why that particular one. As I said, though, the algorithms are evolving, and it is probably only a matter of time.
The strengths of AI agents are clear: the agent sees connections I miss and tries out paths I would not have thought of. It is faster than me and has no problem running through twenty variations before it decides.
The problem: I cannot always explain the result. And if I cannot explain it, I cannot sell it, because the uncertainty factor is simply too large. The more uncertain the markets are, the more certainty companies need in their internal planning.
So What Is Better?
The catch is the question itself. It is not about better or worse; it is about what you can control and what you cannot.
My ontology I control myself: I can make any adjustment I want, whenever the team and I consider it sensible or necessary. I built it, I know every connection in it, and I know where it is strong and where it has gaps. When something goes wrong, I know where to look.
Agents I do not control. I can give them context and guardrails, but what they make of it is different every time. At that point the agents are no longer a team; they look more like a pack of uncontrolled egomaniacs, each trying to push through its own idea of the best result with all its might. That may sound unfair and does not quite hit the core, but watch an AI agent system at work live, and you will see it is not only my impression. The fact is: it does not produce perfect results. On the contrary: fully automated, at least as much goes wrong here.
So the right question should be: when will agentic AI deliver a perfect ontological model? When will the agent build, on its own, the structure I build by hand today? Faster and maybe better? I work with AI every day and still do not have an answer that convinces me, because AI agents act too wildly. The task for developers is therefore to tame the algorithms.