Today we launch our new report on machine intelligence, “The Frankenstein Factor”. It’s an anatomy of the fear of artificial intelligence. Oxford professor Nick Bostrom is positioned in the paper as the Chief of Fear. Although he doesn’t like to be associated too much with Hollywood fears of AI, he’s the spokesperson for doomsday scenarios: in the end we’ll all be the slaves and the computers the masters (which, by the way, is already the case, judging by our smartphone behavior).
To be honest, I do like this kind of free-format philosophy. I’m a science fiction fan. I’m a trend watcher. I like provocations. And I do understand that life on this planet is as much about entertainment as it is about survival. Hollywood stories sell well. The scenario of Mary Shelley’s “Frankenstein; or, The Modern Prometheus” is seen in many movies. It goes like this: we create our machines, automata and androids, and in the end they turn against us. We see Enlightenment and Romanticism fighting each other, and Romanticism wins: we should not play God, and we get what we deserve when we go too far with technology. This cultural dimension is one of the things you’ll find in the report. It explains why we fear AI from a cultural perspective, rather than addressing the question of whether we should fear the technology. It also explains why the taming-technology attitude in Japan leads to less fear. We are obsessed with robots precisely because we separate science and soul, whereas Shinto culture assumes these two things cannot be separated at all.
Anyhow, what I learned, while going through all these arguments against and in favor of artificial intelligence, is that our minds are easily fooled. You probably know what I’m talking about. As early as the beginning of the 20th century, the psychoanalysts Ernst Jentsch and Sigmund Freud explained what happens when we are confronted with androids and automata (living dolls, at that time). We feel ‘unheimlich’, looking at something that is so familiar (heim) and so hidden (secret) at the same time. Freud was obsessed with the word heimlich. The fear it raises has nothing to do with any rational prediction about the future. Here and now we are confronted with our doppelgängers, the synthetic look-alikes we create, our own Frankensteins. It triggers a fear of ourselves: we’re looking into the mirror of our untransparent selves. Accepting that we are not as special as we think we are, along with all the other arguments for why we should not fear artificial intelligence, has been (brilliantly) explained by another Oxford professor, Luciano Floridi. Meanwhile, my question, looking at the photo of the boys, is:
Will computers ever be intelligent enough to understand how stupid people are?
That shouldn’t be too difficult, should it? Another book, the magnum opus of Nobel Prize winner Daniel Kahneman (Thinking, Fast and Slow), is most convincing in showing that humans are extremely lousy decision makers. So as long as there’s no real superintelligence (it could take another 500 years, says Bostrom), we could focus on superstupidity. It’s ubiquitous, it’s real, it’s here today, and we can do something about it now.