Shadow-AI: to befriend or to boycott?

Mathijs van Bree
Jul 23, 2024

“Shadow IT refers to the utilization of information technology systems, devices, software, and services without explicit approval from the IT department. However, shadow IT isn’t inherently malicious; it often arises from a desire for convenience or customization.”

It’s a well-known phenomenon: Shadow IT. Due to the rapid adoption of artificial intelligence, Shadow AI is now quickly becoming a reality for many companies. After all, getting started with ChatGPT, Midjourney, DALL-E, or other generative AI tools is easy. Yet users who benefit from their own AI tools no longer stay under the radar.

It may seem new, but nothing fundamentally different is happening here. Shadow AI is in essence nothing more than a new variant of Shadow IT. Gartner has previously estimated that about one-tenth of an organization’s IT landscape consists of Shadow IT. Since this often involves relatively expensive individual software licenses, the cost of Shadow IT for large organizations can amount to 40% of their total IT expenditure[i]. With Shadow AI, it’s not much different. According to a recent Microsoft report[ii], 75% of respondents said they use generative AI tools at work. Almost 80% of this group bring their own tools to work, often resulting in Shadow AI. Compared to its predecessor, however, this new variant of Shadow IT is more complex and riskier.

The risk of sensitive information leaking is higher, especially since anyone can use generative AI tools that are very handy, or at least seem to be. Moreover, AI applications typically involve more text data than traditional applications. Employees can upload numerous text documents to ChatGPT without knowing what happens to the data. It gets even worse when confidential IP ends up in a publicly available AI system: a global electronics company’s codebase was leaked to OpenAI because employees used ChatGPT to find bugs in sensitive code. The risk of unintended copyright infringement is also higher, since many AI models are trained on copyrighted data.

Since many generative AI tools like ChatGPT are free, it’s easy for anyone to get started. Sometimes generative AI can do in three minutes what would otherwise have taken at least a week. However, that same free AI tool isn’t always reliable, because many tools are trained on outdated data. Although generative AI is primarily a reasoning machine, it is often treated as a knowledge source, and in that role it shouldn’t be blindly trusted. As a result, hallucinations and inaccuracies can quickly creep in, as various incidents at major companies have shown.

Ethical aspects

Let’s boycott generative AI altogether then, right? That’s not an option. A ban doesn’t work, and more importantly, it can stifle talent development and innovation. The art is to create a plan that allows everyone to use generative AI responsibly and safely. That’s easier said than done, because generative AI experts are still far from aligned. This principle applies not only to out-of-the-box AI tools, but also to in-house developed AI applications. It often happens that such an application is developed on Google Cloud Platform, only to find out later that it can only be operationalized in an Azure environment due to the existing IT landscape. Because many aspects of generative AI are still unclear, such as the ethical ones, it is important to involve many different stakeholders before an AI tool goes into production. With the right specialists, such as AI architects, you can create a group of experts or a steering committee that ensures the new AI tool fits the existing architecture.

Many AI proofs-of-concept are already being developed without a clear generative AI architecture in place. In this way, technical debt builds up until an architecture is defined. Still, foster these initiatives: AI PoCs are ideal for learning what is and isn’t possible, which will ultimately also help establish an AI strategy and architecture. But don’t wait too long, so that technical debt stays within limits. Such a plan doesn’t have to take years to develop. The generative AI landscape is evolving rapidly, so put processes in place that allow for fast adaptation. Having a good foundation is crucial, however. At a minimum, you can guarantee privacy and security when using and creating AI tools, and you’ll be able to monitor where, what, and how much data flows in and out of the organization. If you lay a strong foundation, it becomes even more attractive for users to bring generative AI out of the shadows and benefit from the innovation it can bring.


[i] https://www.everestgrp.com/in-the-news/dont-fear-shadow-it-embrace-it-in-the-news.html

[ii] https://www.microsoft.com/en-us/worklab/work-trend-index/ai-at-work-is-here-now-comes-the-hard-part#section1


About the author

Mathijs van Bree

Artificial Intelligence Specialist | Netherlands
Mathijs is an Artificial Intelligence Specialist at the AI CoE of Sogeti Netherlands, responsible for implementing and leading machine/deep learning projects. His expertise lies in the exciting and ever-evolving world of generative models and synthetic data.
