Artificial intelligence has exploded upon the world in the form of the generative AI chatbot known as ChatGPT.
Only five days after its launch to the general public, it had garnered one million users, far outpacing the uptake–at least by that metric–of any other program or social media system introduced since the dawn of the Internet.
And that amazing pace of uptake has not slowed. By the end of the second month after its introduction, it had shot up to 100 million users.
Others have spent much blog space on examining the why and how of the generative AI revolution that seems to be taking place. And much of that narrative extols the transcendent possibilities of the future of humanity in partnership with this new form of machine intelligence.
I want to take a somewhat different view here–one that is more admonitory and intentional in nature.
As is the case for any new technology, we as IT professionals will be one of the groups cheerleading the wider use of generative AI in society–though it appears that little help is needed there.
It is also incumbent on us to serve as technology guardians on behalf of the society we inhabit. Most users of this new technology will not have our in-depth knowledge of its shortcomings, and so cannot make fully informed judgements about its safe and proper use.
Some technology experts have warned of apocalyptic and even existential crises attendant upon the widespread use of ChatGPT and similar technologies. This is well and good–we need adverse voices to make us aware of potential problems to society.
I want to point out another pitfall that appears to await us as we rush to the use of generative AI: the fact that, in one way, generative AI seems to mimic humans all too well.
We, and they, are able to lie with sincerity and authenticity.
If we treated AI with the same sense of skepticism with which we treat other humans–whom we know harbor the same darker impulses we are all capable of–this would not be a major issue.
But when interacting with AI, we seem more willing to suspend this skepticism. That seems natural: we do not expect machines to deceive us the way people do, and there are few non-verbal cues we can rely on to gauge veracity.
This is made worse by the fact that ChatGPT is designed to mimic human behavior and language, and can do so with astonishing ease and rapidity.
So, we are led to consider a new “threat” from ChatGPT: that it can appear to provide definitive and truthful answers that can be taken at face value. And in some cases, those deceptive answers can do great harm.
One such case is where ChatGPT invented a sexual harassment scandal where none actually existed. And there are others.
Does this mean that we need to call an immediate halt to the widespread use of ChatGPT as some groups have already done? For instance, Italy has already banned the use of ChatGPT. Legislation has been introduced in the US Congress to regulate its use (interestingly, the legislation itself was written by ChatGPT).
I think banning or severely restricting its use may be a step too far. Pausing may be a better course as we grapple with the downsides of this new technology.
Even that, however, may be seen as too much.
I would like to suggest another alternative: that we use our unique position as IT leaders and thinkers to cultivate in our clients, our friends, and ourselves a healthy sense of skepticism about the trustworthiness of this new tool.
Much like most of us already do with social media, we need to critically examine the claims generative AI makes when we interact with it. ChatGPT and the like are only as good as the people who train them and the material chosen for that training.
ChatGPT is not an infallible Oracle of Delphi. It’s a tool, trained by humans to interact with humans in a “human” manner.
With all the good and bad that implies.