The recent, unexpected dismissal of CEO Sam Altman by OpenAI's board, followed by the announcement that he would join Microsoft, offers a fascinating glimpse into the modern version of an age-old struggle: the one between innovation and the fear of its unknown consequences. OpenAI, established in 2015 as a nonprofit organization with the mission to create artificial general intelligence (AGI) for the benefit of all humanity, adopted in 2019 a "capped-profit" structure in which commercial returns were possible, but only under the strict governance of the nonprofit board.
This structure led to an internal power struggle between two ideological camps within the company, which Altman labeled "tribes." On one side stood the techno-optimists, driven by commercial progress and innovation; on the other, advocates of caution and ethical restraint, concerned about the existential threats AI could pose to humanity. The tension peaked with the development and release of ChatGPT, and later GPT-4 and GPT-4 Turbo, which were at once commercially successful and sources of concern over unintended consequences. The pressure to commercialize products clashed with the company's founding mission, creating an unsustainable division within its leadership.
This struggle within OpenAI reflects a deeper philosophical question that has long preoccupied humanity: how do we handle our own creations, especially when they have the potential to surpass or even threaten us? History shows that every major technological advance brings immense benefits along with significant risks. As the historian Melvin Kranzberg put it, technology is neither good nor bad; nor is it neutral.
What becomes clear from the developments at OpenAI is that the future of AI, and possibly the future of humanity, depends on finding a balance between these two forces. The challenge is not merely technological but ethical and philosophical. In our rush to push the boundaries of what is possible, we must keep asking: just because we can do something, does that mean we should?