
THE AI SYNDROME – A DIAGNOSIS 

June 23, 2025
Marco Venzelaar

AI syndrome, with AI standing for Acute Interruptions to Actual Innovation. So many voices speak out about AI, from technology geeks and tech visionaries to doomsday thinkers and conspiracists, but in the middle sits the wider society. Most of us have now come “into contact” with AI in one way or another. However, AI is not something new; it has been talked about for a long time, and promises have been made and broken. Now that AI is out in the wider world, having “escaped” the confines of supercomputers and the exclusive use of technology geeks, it is time to talk about its use so far. I will confess upfront that I am not an AI expert, but I am a technology geek with plenty of experience, and AI is all around me in my professional life.

AI is in the news daily, especially if you follow the technology media, which offers a constant stream of coverage. Statements made about AI by plenty of product owners, AI developers and industry experts suggest that AI can do pretty much anything. Some say that AI will make life easier by taking over “boring” tasks and letting you get on with your real job. Others extrapolate even further and predict that AI will start the next world war or, Terminator-esque, take over the world. With such wild statements, where do we go with AI?

With so many uses of AI being shouted about, let’s look at some of them, starting with the more sensible ones: assisting doctors in identifying cancerous cells in X-ray images and acting as extra eyes for undiagnosed symptoms (article), detecting spruce bark beetles in Swedish forests (article), and checking compliance with property tax laws in France (article).

New technologies are often pushed to see how far they can be taken, and this sometimes strays into more questionable use cases: AI voice bots answering callers at support call centres (article), AI supporting human resources with paperwork and sifting through CVs (article), and even police using it to identify individuals in a crowd (article). We also know that fake news can be made to look more genuine using AI, and we have seen it flood the media channels – probably more the social media channels than the media corporations. Other examples are where AI makes decisions on, for instance, mortgage lending and insurance.

But it has also gone too far, and not just in a computer lab. Recent examples show AI output making it into government policy, citing non-existent research (article). Arguably worse, it even appeared in a major court case, quoting case law that didn’t exist. While the lawyers initially denied using AI, it turned out that it had been used during the research for the case (article).

Forms of AI were used in the military a while back, though probably not called AI at the time. With the advances it has made since, you can see the darker side of AI starting to develop, especially now that the price has come down and AI can be applied to almost anything. The film War Games came out in 1983, and we understood the warning then, perhaps thinking it would never get that bad. The year after, the first Terminator film came out and painted a very doomed scenario; it is still often used as a reference for how bad things will, or could, get.

We have reached the point where AI is making a significant societal impact; some might say we have already passed it. It is, therefore, no longer just a technology question where tech geeks can show off their prowess; it has become a societal question, and as a society we now need to ask where it is sensible to apply AI. We need to move beyond whether it is technically and financially possible, or whether there is enough data to create the model. We even need to go beyond the question of AI models using copyrighted materials. Our question should be: what is the societal impact of using Artificial Intelligence? As a society, things like copyright and privacy are important, but so are sustainability and responsibility to humanity wherever AI interacts with people or makes independent decisions.

We know that if technology companies are the “gatekeepers”, over time their aim will shift from showing off their technology and capability to making money off it. I am not professing that AI cannot be allowed to be profitable, but profit cannot be the only requirement.

The lifecycle of technology seems to repeat each time something new appears on the scene. Look, for instance, at social media networks, which were initially all about connecting family, friends and groups, only to lose that purpose when they became over-monetised and misused.

Generally, a technology is not created for financial gain initially, but financial gain is never far away. The Internet grew from scientists’ need to communicate with one another, and AI is now used to advance long-running and sometimes difficult tasks (in the science world too). If AI makes it easier to analyse gigabytes of data, then absolutely use AI.

Organisations hold a lot of valuable information about their development lifecycles, and that will only increase with time. We know when a project is overdue, and most of the time we know why at a granular level (if we are honest with ourselves). That information can also tell us which reported defects were indeed defects, and even where the root causes lay. These information items are generally clearly defined within the boundaries of the organisation, which makes this a good use case for introducing AI to support all team members with in-depth analysis (Sogeti’s Amplifier is one such accelerator).
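As a purely illustrative sketch of the kind of analysis meant here – not Sogeti’s Amplifier itself, and assuming hypothetical defect records with made-up fields such as root_cause and days_open – even a few lines of Python can show where delivery time is actually going before any AI model is layered on top:

```python
from collections import defaultdict

# Hypothetical defect records; in practice these would come from the
# organisation's own tracker (Jira, Azure DevOps, etc.).
defects = [
    {"id": "D-101", "root_cause": "requirements", "days_open": 12, "valid": True},
    {"id": "D-102", "root_cause": "environment",  "days_open": 3,  "valid": True},
    {"id": "D-103", "root_cause": "requirements", "days_open": 20, "valid": True},
    {"id": "D-104", "root_cause": "user-error",   "days_open": 1,  "valid": False},
]

# Keep only the genuine defects and total the days lost per root cause.
days_by_cause = defaultdict(int)
for defect in defects:
    if defect["valid"]:
        days_by_cause[defect["root_cause"]] += defect["days_open"]

# Rank root causes by their cost in days - the simple baseline any AI
# assistant doing "in-depth analysis" should be able to beat or explain.
for cause, days in sorted(days_by_cause.items(), key=lambda kv: -kv[1]):
    print(f"{cause}: {days} days open in total")
```

An AI model adds value on top of a baseline like this by spotting patterns across thousands of records that a simple tally cannot, but the point stands: the data is well defined and already inside the organisation’s boundaries.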

Organisations might continue to strive for more efficient processes, including having AI deliver code fixes or build applications. While code is generally written in structured languages, which allows AI to help (or take over), one should not forget that the developer brings a creative element to the process that is hard to replace (article). Lazy coders might string together a set of AI suggestions into some functionality, but they will still need to spend time ensuring that what they strung together is (a) working and (b) exactly what was asked for, as the sketch below illustrates.
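To make that concrete with a hedged example – the function below stands in for an AI-suggested snippet, and the names and the requirement are invented for illustration – the quickest honest check is to write the expected behaviour down as assertions before accepting the suggestion:

```python
import math

# Hypothetical AI-suggested snippet: round a price to two decimal places.
def round_price(amount: float) -> float:
    return round(amount, 2)

# (a) Is it working at all? The happy path passes.
assert round_price(19.999) == 20.00

# (b) Is it exactly what was asked for? Suppose the requirement was
# "always round UP to the next cent" - then the suggestion is subtly wrong:
assert round_price(19.991) != 20.00  # round() gives 19.99, not 20.00

# The developer still has to supply the behaviour that was actually asked for.
def round_price_up(amount: float) -> float:
    return math.ceil(amount * 100) / 100

assert round_price_up(19.991) == 20.00
```

Both checks take minutes, yet they are exactly the part that cannot be delegated: only the person who knows what was asked for can say whether the suggestion meets it.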

More recently, Microsoft and Google have been putting AI on top of their search results, with Google even going as far as to say that the future of Internet searching will ONLY be through a chatbot that gives you a single response (article). This starts to turn a Google search into an Oracle which knows everything and is always right – right? We must be critical about this. Is the Internet right all the time? Especially if models are trained on the “Internet” out there. We know that the Internet is full of great sources of information, but at the same time there are even more sources that get it wrong. It would be great to understand how the AI knows what is right and what is wrong; it cannot simply be based on the most clicks or the most appearances and quotations. Facebook is now looking at all public Facebook and Instagram posts, but we know social media is not always a balanced place. This opinion piece in “The Register” spells it out quite well: GIGO – garbage in, garbage out. I would also say that this is why we teach our children from schoolbooks rather than letting them loose on the Internet to find their own answers.

AI is brilliant (the tech geek in me speaking here), and I would argue we are already at the point where we cannot live without it anymore. So what do we do to get the best out of it? We need to be clear about the impact of each implementation.

Overall, technologies tend to advance society, either a sub-group or the whole of it. Those ambitious goals are often what drives the people standing at the inception and initial development of these technologies. Plenty of discoveries won’t make it, but those that stand the test of being useful and/or beneficial can take off. If a discovery is then also commercially viable, it really takes off. We saw that with the Internet: slow at first, but when it became accessible to wider society and commercially viable it became widespread, and it is now pretty much everywhere – to the point that we can’t live without it. AI is now following the same path; it is commercially viable, and you are seeing it pop up across wider society.

But with success also comes failure – failure of the intended use, so to say. We know that the Internet has made us more vulnerable to being hacked and having our personal information stolen or misused for nefarious reasons, and we are seeing the same start to happen with AI. It is easy to say “pull the plug”, but as the saying goes, the horse has bolted, so closing the stable door won’t make much of a difference.

While we have seen the good, the bad and the ugly, and will continue to see them in the future, we should start to look at how to use this technology properly. We need to be realistic, which is certainly easier said than done, especially if people from either extreme end are involved in the conversation.

It needs to make sense where we apply AI, which of course goes for all technologies. Given the outsized impact AI will have, be specific about its use and truly understand its impact on users and end-users; the latter is especially important. Internet banking revolutionised access to banking services and made payments a lot easier, but not everybody has access to internet banking even today. And even among those who do have access, some simply cannot use it for what they need their bank to be for them.

And yes, there might be a need for some ground rules. We know what unfettered growth can do to a technology, and we all have a responsibility, politicians included, to figure out what those rules are. The UK Information Commissioner’s Office (ICO) has started to draw up a strategy for the use of AI in facial recognition (article). The UK Government has also started to draw up guidelines (article), though it leaves open the question of whether these go too far or not far enough.

One last point, and this is an extension of the societal impact: we need to ensure that we do not leave people behind. Technology is already advancing so much of life, and AI will no doubt add to that (maybe even exponentially). There are real opportunities to support people with disabilities or specific needs. Language translation applications were already rather good, but AI has the potential to remove language barriers altogether. We need to ensure that these groups are not forgotten in every application they interact with. We already see many elderly people left behind: as services are digitised and moved online, they become harder for them to access. AI can easily alienate the elderly even further.

So, when you talk with colleagues, clients and vendors, remember that Google started off with the fantastic corporate statement “Don’t be evil”. Unfortunately, that has not stood the test of time. Encoding an AI with “Do no evil” might not be the best approach either; please learn from the past and apply those lessons when building AI models and their applications!

Oh, and please consider all of the above when the latest AI industry buzz pops up on your radar: AI Agents. If we are still trying to figure out AI itself, how can we allow AI to make decisions for us, be they simple or critical ones?

About the author

Managing Consultant | Test Automation | UK
Marco builds lasting relationships with clients and tool vendors to provide our customers with a full and, more importantly, practical overview of how to use test tools to their fullest capability and integrate them into business processes.
