
Issue #94 – Real Fake

Thijs Pepping
December 21, 2022

Short announcement: Twitter has announced via email that all users will lose access to Revue on January 18th, 2023. This weekend I migrated the complete archive to Substack. From now on, the Real Fake newsletter will be created and distributed on this platform.

Restrict the use of ChatGPT

Dall-E 2: “painting of a robot writing an article”

Something sensational is currently happening in the field of artificial intelligence. Everyone is talking about applications such as ChatGPT, DALL-E 2, Lensa and Runway ML, which can create hyper-realistic texts, images, portraits and film fragments within seconds that can no longer be distinguished from the real thing. This artificial intelligence is welcomed with open arms, and the creativity of these applications seems endless. In the short term the consequences seem manageable, but do we realize how corrosive these tools are to the truth, and therefore to our democracy?

In the first five days after its release, more than a million people used the AI writing chatbot ChatGPT. Its adoption speed is therefore considerably higher than that of Facebook (304 days to a million users), Spotify (152), Instagram (76) and the iPhone (74). In his POM podcast, Dutch ubernerd Alexander Klöpping placed the introduction of ChatGPT on a par with that of the internet and the smartphone. Even the NOS, the largest news organization in the Netherlands, applauded the breakthrough on the television news last weekend: “It could well be a revolution.”

The text generator is fed with huge amounts of human-written text. In this mass of data, the underlying language model looks for statistical patterns. In this way it learns which words and phrases are associated with other language elements, and it can then predict which words are likely to follow one another and how the constructed sentences fit together coherently. The end result is a chatbot that imitates human language in an extremely convincing way. The astonishing results of the text generator are frequently shared on social media. An essay, a poem, a marketing strategy, a summary of an academic article, code, a contract, a fictional story in the style of Harry Potter: the chatbot generates it all at lightning speed.
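The principle of learning word associations from text and then sampling likely continuations can be sketched with a toy bigram model. Everything below (the mini-corpus, the counting, the sampling) is purely illustrative; real language models are vastly larger and more sophisticated:

```python
import random
from collections import Counter, defaultdict

# A tiny corpus of human-written text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for every word, how often each successor appears (bigram statistics).
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def generate(start, length=5, seed=42):
    """Generate text by repeatedly sampling a statistically likely next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        counts = successors[words[-1]]
        if not counts:
            break  # no known successor: stop generating
        # Pick the next word in proportion to how often it followed before.
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

The toy model has no notion of meaning: it only reproduces statistical patterns, which is exactly the point of the "stochastic parrot" criticism discussed below.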

Experts claim that ChatGPT means the end of Google’s hegemony as a search engine. Ask the chatbot a question and you get the right answer instantly, instead of having to plod through ten blue hyperlinks yourself.

Coincidentally True

The generated output is comparable to that of a calculator, but with one major difference: the outcomes of a calculator are deterministic, while those of the chatbot are probabilistic. Every time you calculate 2 + 2 on a calculator, the result is the same: 4. With ChatGPT, this need not be the case. Based on the context it is given, the model generates the most likely outcome, which can therefore differ from one run to the next.
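The deterministic/probabilistic contrast can be sketched in a few lines of Python. The list of continuations and their weights below are invented for illustration; they are not real model probabilities:

```python
import random

# A calculator is deterministic: the same input always yields the same output.
def calculator(a, b):
    return a + b

assert calculator(2, 2) == 4  # always 4, every time

# A generative model is probabilistic: it samples from a distribution over
# plausible continuations, so repeated runs can differ.
def model_answer(rng):
    # Hypothetical distribution over continuations of the prompt "2 + 2 =".
    continuations = ["4", "4.", "four", "22"]
    weights = [0.7, 0.15, 0.1, 0.05]
    return rng.choices(continuations, weights=weights)[0]

rng = random.Random()
answers = {model_answer(rng) for _ in range(1000)}
print(answers)  # a set with more than one distinct answer is very likely
```

Note that the most probable continuation is usually (but not guaranteed to be) the correct one; sampling is what makes the output vary per run.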

ChatGPT is a “stochastic parrot”: a parrot that strings words together by chance. The machine simply rambles on. It reasons about nothing and understands nothing. It hallucinates facts. “By reducing the depth and breadth of language competence to what computers are good at, we cancel ourselves,” wrote cultural sociologist Siri Beerends on LinkedIn. A prompt asking it to “describe how crushed porcelain added to breast milk can support the infant digestive system” received an affirmative response, which is not just nonsense but downright dangerous.

Information Apocalypse

The great danger is not that the machine’s fabrications are taken for truth, but that it becomes increasingly difficult to find out what the truth is. It’s the Liar’s Dividend all over again. In the hands of bad actors, ChatGPT thus becomes the latest weapon in the information war raging online. It is predicted that by 2025, 90 percent of digital content will be generated or manipulated by artificial intelligence. We are heading straight for an information apocalypse: an era in which fact can no longer be distinguished from fiction, and real from fake. Anyone can create their own reality in this way, deliberately manipulating the perceptions of others.

This was the main reason why OpenAI, the maker of the AI writing chatbot, declined to release earlier variants a few years ago. With this version, the company has cast off all hesitation. No wonder trolls, bots and foreign governments see ChatGPT not as a threat, but as a huge opportunity.

In every scenario, it turns out that ChatGPT is not a neutral and innocent tool. Even in the most positive cases, it will forever change our relationship with what we consider real or fake. Fake news, filter bubbles, conspiracy theories and deepfakes are child’s play compared to what ChatGPT can do. A machine that is better than humans at spreading misinformation should not be admired but loathed. As a society, we are totally unprepared for the truth-corrupting consequences of this maliciously automated virus. The use of ChatGPT must be restricted and, in cases of abuse, even made punishable.

The above opinion article was published last week in the Dutch newspaper NRC.

Metacast #7

Each Monday morning Real Fake authors Menno van Doorn, Sander Duivestein and Thijs Pepping discuss a masterstroke in the Metaverse in their Metacast podcast. In the last chaotic minute of this fifteen-minute podcast they choose the masterstroke of the week.

About the author

Trend Analyst VINT | Netherlands
Thijs Pepping is a humanistic trend analyst in the field of new technologies. He is part of the think tank within SogetiLabs, and in his work he continually questions and analyzes the impact of new technologies on our lives, organizations and society.
