There’s a lot of interest in, and concern about, the use of AI agents. For organizations grappling with whether and how to use agentic AI, I recommend approaching the question through the lens of complex, rather than complicated, systems. Indeed, accepting that agentic AI is complex rather than complicated will be key to harnessing its power while applying the necessary protections and controls.
What’s the difference between complex and complicated? Computer science, for example, deals in complicated systems: cause and effect can be traced from an engineering perspective. Anthropology, on the other hand, deals in complex systems: you can’t control every variable, so you have to reason instead in terms of contributing “factors,” as they are called in finance.
In complex systems, we can only express confidence about what we think is happening. We can be, for example, 60% sure or 85% sure, but we can never be absolutely sure. Often, we get to the right answer for the wrong reasons. We can even get to the wrong answer for the right reasons; any outcome that falls outside our stated confidence is still possible. Outcomes are innately multivariate, and it’s impossible to know exactly why they turned out the way they did.
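To make that idea concrete, here is a minimal sketch in Python, with entirely made-up factors and numbers, of what reasoning in terms of confidence rather than certainty looks like for an agent whose outcome depends on several interacting variables. It is an illustration of the statistical framing, not a model of any real agent.

```python
import random

random.seed(42)

def run_agent_once() -> bool:
    """One trial of a toy 'agent' whose success depends on several
    interacting random factors (all hypothetical)."""
    retrieval_quality = random.uniform(0.5, 1.0)   # how relevant the fetched context was
    prompt_ambiguity = random.uniform(0.0, 0.5)    # how underspecified the request was
    tool_reliability = random.random() > 0.1       # did an external tool call succeed?
    score = retrieval_quality - prompt_ambiguity + (0.2 if tool_reliability else -0.3)
    return score > 0.55

# Estimate the success rate over many trials. We can say how confident we
# are in the aggregate rate, but no single run can be fully explained by
# any one factor.
trials = 10_000
successes = sum(run_agent_once() for _ in range(trials))
p = successes / trials

# Rough 95% interval via the normal approximation to the binomial.
margin = 1.96 * (p * (1 - p) / trials) ** 0.5
print(f"Estimated success rate: {p:.2%} ± {margin:.2%}")
```

The point of the sketch is the shape of the answer: a rate with an uncertainty band, never a guarantee about why any individual run succeeded or failed.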



