We IT guys love fancy names. Agentic: that sounds so cool, so mysterious. We imagine the super-high technology behind it. We feel the pride of the chosen ones who have the privilege of working in this field.
A long time ago, we started developing satellite systems around core ERPs; it was a massive trend across industries. Developers interrogated the core system via APIs, message queues, or fully proprietary protocols. Who remembers APPC from IBM? Based on the response, those programs (as they were called in the '90s) took action. In the automotive industry, those actions could feed a display panel or, even better, move a robot.
I feel sad for those guys: they were doing fancy stuff that they could only call “programs”. Back in the ’90s, agents wore trenchcoats, Wayfarer sunglasses, and of course carried the newspaper of the day.
In this new age of AI, developers—or power users—build systems that interrogate an AI engine and take actions based on the result. Or they are creating systems that use AI help, building a prompt that depends on inputs from a database, a web form, or another system. But they aren’t creating programs. They are part of another league: they proudly create agents. Agentic AI is their world.
Could you imagine integrating an API via a manual copy-paste of the result into another system or database? That wouldn’t make sense, would it? The same applies to agentic AI. It’s not an evolution, it’s a necessity: once you have spent time playing with prompts and manually acting on AI responses, it’s time to automate the process and code actions that depend on the prompt result. There’s no magic here, just structured common sense.
I’ll stop this war against the agent hype here: after all, IT history is all about buzzwords. But what if this terminology had some meaningful roots? The term “agent” has long been used in the IT world. Usually, agents are small programs at the periphery of larger software, acting on its behalf. They process simple or complex actions by delegation, usually in a different location than the main system (close to a workstation, a database, or network equipment). This work is driven by decisions taken at the host level.
And guess what? AI Agents have some similarities with this pattern. Basically, AI APIs just respond to prompts, with limited access to the outside world. AI by itself can’t trigger any actions in your environment, or on itself, to dig deeper into a problem. Let’s take a very basic example and assume you run a hotel and would like AI to compile all the negative reviews you have on social media so that you can manage some improvements.
For this, you write a service called “Issues summary” in your favorite language (Java, .NET, PHP, Python, JavaScript… it could also be low-code).
- This service interrogates the social media APIs and collects all reviews from the last 6 months.
- Those entries are structured in a file given to the AI, with a prompt asking it to build a structured list containing a rating out of 5, a sentiment score out of 5 for the writing, and the reviewer ID.
- From the resulting list, “Issues summary” extracts the reviewer IDs of reviews with a 1/5 or 2/5 rating or a bad sentiment score (people can kill you in a review, even with a 5!).
- You call another service called “professional hater check” that tells you whether those IDs commonly post bad reviews. If so, you remove the corresponding reviews from the list, as you don’t want to take into account reviews from people who hate the entire world.
- “Issues summary” uses the “professional hater check” result to filter out meaningless reviews: now you have a result that you can really exploit.
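The steps above can be sketched in a few dozen lines of plain Python. Everything here is a stand-in: the function names, the review format, and the hard-coded scores are illustrative assumptions — a real version would call the social media APIs and an AI API where the mocks sit.

```python
def fetch_reviews():
    """Stand-in for the social media API calls (last 6 months of reviews)."""
    return [
        {"reviewer_id": "u1", "text": "Lovely stay, spotless rooms."},
        {"reviewer_id": "u2", "text": "Dirty bathroom and rude staff."},
        {"reviewer_id": "u3", "text": "Awful. Everything is always awful."},
    ]

def analyze_with_ai(reviews):
    """Stand-in for the AI prompt: returns a rating /5 and a sentiment
    score /5 per review. A real implementation would send the reviews
    to an AI API in a structured prompt and parse its answer."""
    mock_scores = {"u1": (5, 5), "u2": (1, 1), "u3": (1, 1)}
    return [
        {"reviewer_id": r["reviewer_id"],
         "rating": mock_scores[r["reviewer_id"]][0],
         "sentiment": mock_scores[r["reviewer_id"]][1]}
        for r in reviews
    ]

def professional_hater_check(reviewer_ids):
    """Second service: flags IDs that habitually post bad reviews (mocked)."""
    known_haters = {"u3"}
    return {rid: rid in known_haters for rid in reviewer_ids}

def issues_summary():
    reviews = fetch_reviews()
    analyzed = analyze_with_ai(reviews)
    # Keep only genuinely negative reviews (low rating or bad sentiment).
    negative = [r for r in analyzed if r["rating"] <= 2 or r["sentiment"] <= 2]
    # Filter out the professional haters flagged by the second service.
    haters = professional_hater_check([r["reviewer_id"] for r in negative])
    return [r for r in negative if not haters[r["reviewer_id"]]]

print(issues_summary())  # only the genuine complaint (u2) survives the filters
```

Notice that the “agents” here are just functions calling each other: the orchestration logic is ordinary code, with the AI call as one step among others.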
And you know what? You just wrote a multi-agent system with plain old vanilla code.
However, despite this nice result, you will only now start experiencing the real problems:
- Who can access this service?
- How autonomous is it? Does it need a human check? What will it cost?
- Is there a risk of disclosing information?
- What’s the risk of a bad decision?
- …
These are questions of governance rather than technology. Accelerators such as agent builders make it easier to implement the right governance. In fact, that’s what justifies the fancy “Agent” name. Agents are not just programs that automate the creation of prompts and the generation of actions based on the response. Due to the very specific nature of the AI API called in the middle, they need to be much more than that.
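One way to picture that extra layer is a thin guard around the AI call, checking data on the way in and on the way out. This is a minimal, illustrative sketch — the policy rules (keyword matching, a human-review flag) are assumptions standing in for real governance tooling:

```python
def guarded_call(ai_fn, prompt):
    """Wrap an AI call with input and output controls (illustrative only)."""
    # Input control: block prompts that might disclose sensitive information.
    if "password" in prompt.lower():
        raise ValueError("blocked: prompt may disclose sensitive information")

    response = ai_fn(prompt)

    # Output control: route risky-looking answers to a human before acting.
    if "delete" in response.lower():
        return {"status": "needs_human_review", "response": response}
    return {"status": "ok", "response": response}

# Usage with a stubbed AI function standing in for the real API:
result = guarded_call(lambda p: "Summary: too salty", "Was your soup OK?")
print(result["status"])  # ok
```

The point is not these particular checks, but that the agent, not the AI, owns the decision of who can ask what and which responses are allowed to trigger an action.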
At the end of the day, a good agentic AI system is a robust architecture serving the business needs while enforcing the governance required by AI usage. Pushing basic scripts found on the Net to production is a recipe for disaster. Think of Agents as entities controlling inputs and outputs before acting. They are just like the guy in the trench coat prompting his contact with a passphrase like “Was your soup OK?” and waiting for “too salty”. The only difference is that nobody will fire a 9mm bullet at you if the answer is wrong. At least for now…