The Entropy of Historical Impact
I recently saw “Oppenheimer” at a local theater and spent three immersive hours learning about the Manhattan Project. There’s a theme in the movie that is as relevant to IT today as it is to quantum physics. I hope discussing it here isn’t a spoiler.
In the simplest terms, I’m talking about a kind of entropy, whereby once something has happened, everything is changed permanently. After the Manhattan Project led to the bombing of Hiroshima and Nagasaki, there was no going back to a non-nuclear world. The cat was out of the bag. The stable was empty, and the horse had bolted. The can, it seems, is full of worms.
AI and Historical Inaccuracy
In keeping with that theme, we are facing a similar situation within computing at the moment, primarily but not solely within artificial intelligence. AI is not actually a new discipline, but it has come of age in recent years, with increased computing power and supporting developments fueling significant advancements. Computer-generated facsimiles can bring actors back to life, and for now we can still discern them from the real thing. But deepfake technology has produced convincing footage of people saying things they simply never said, and probably never even thought.
The Challenge of Truth in Modern Computing
It seems pertinent to share a quote often attributed to Winston Churchill at this point: “A lie gets halfway around the world before the truth has a chance to get its pants on.” In a world of increasing media access, there should be a moral obligation to tell the truth in the media. The film “Pearl Harbor” was, I recall, fairly entertaining. And “U-571” told an intriguing tale from the time of the birth of modern computing, as the Enigma machine was cracked by those working at Bletchley Park. And yet both contain historical inaccuracies, perhaps unsurprisingly. “Pearl Harbor” has issues with the Japanese aircraft carriers and planes, and the dogfighting, for starters. The real U-571 was sunk in 1944, before the end of WWII, and it was a British crew, not an American one, that captured the code book alluded to in the film. But increasingly, people are learning history from Hollywood.
This is all very interesting, hopefully, but what’s it got to do with modern computing? Quite a lot, actually. ChatGPT has exploded onto the scene, and it’s the latest cool toy to play with. Only, what if the answers it gives you are all rubbish? How do you tell? We recently asked ChatGPT to write a profile of a colleague based purely on their LinkedIn profile. It reported that this colleague can’t abide West Ham and works in the USA: two “facts” we know to be categorically untrue.
Testing AI: The Quest for Accuracy
But what if we didn’t happen to know that our colleague supports West Ham and has never worked in the USA? How would we be able to tell? And it isn’t simply AI or ChatGPT at fault. People have been known to update Wikipedia entries about themselves to cast themselves in a more favourable light, without clever algorithms to help them.
All of this leaves us with a new problem: how do we test AI? When we generated profiles of our colleague, we got a different answer each time. Which is correct? For AI to be genuinely useful, we have to be confident that it’s telling us the right answer, and that we can put faith in the output.
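One pragmatic way to probe that lack of confidence is a simple self-consistency check: ask the model the same question several times and measure how often the answers agree. The sketch below is illustrative only; `ask_model` and `fake_model` are hypothetical stand-ins for a real AI service, not any particular API.

```python
from collections import Counter

def consistency_check(ask_model, question, n=5):
    """Ask the same question n times and measure agreement.

    Returns the most common answer and the fraction of runs that
    produced it. Low agreement is a warning sign that the output
    should not be trusted without independent verification.
    """
    answers = [ask_model(question) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / n

# Hypothetical stand-in for a real model whose answers vary between calls.
_replies = iter(["West Ham", "Arsenal", "West Ham", "West Ham", "Chelsea"])

def fake_model(question):
    return next(_replies)

answer, agreement = consistency_check(fake_model, "Which team does our colleague support?")
print(answer, agreement)  # West Ham 0.6
```

Of course, agreement only measures stability, not truth: a model can be consistently wrong, which is exactly why the West Ham example above is so troubling.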
This is a tough nut to crack, and there is as yet no simple answer. But rest assured, we’re working on it.
There’s more to this story, and it pre-dates the work at Bletchley Park…