What if Donald Trump were to buy the world's first real 'strong' AI? What would he do with a thinking computer that could out-think everybody else? This was a question that crossed my mind when thinking about the ethics of AI. Or, along the same lines, what if Bill Gates (you know, from the Gates Foundation) were to own one? Or the government of China? Or the Vatican? Different owners would surely lead to different scenarios for how this AI would be used, and different outcomes for mankind.
There is already a lot of discussion about the dangers and opportunities of AI. Elon Musk and Stephen Hawking are among the famous people warning about what could go wrong. Autonomous weapons that kill anybody fitting a certain description? It might just be possible for anyone to build them in a few years' time.
Or perhaps – hopefully – a real strong AI would quickly realize that, to achieve the goals of any owner, progress for all of mankind would be best? Regardless of whether you pursue world peace, great fortune or world leadership, it probably helps if people are happy, healthy and productive. Although, another view says that any computer would quickly realize that the greatest threat to its own operation would be for someone to turn it off, and you don't need to have a super-brain to reason this scenario through. ("I'm sorry Dave, I'm afraid I can't do that.")
On a more serious note: if you're interested in the ethics of AI, I can strongly recommend this video by Nick Bostrom, who also wrote a book on the same topic: Superintelligence.