Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity. The previously unreported letter and AI algorithm were key developments before the board pushed out Altman, who is seen as the poster child of generative AI, two people familiar with the matter told Reuters. The development, dubbed Q*, is an AI algorithm that can solve math problems at a grade-school level and could be a significant step toward AI that outperforms humans at most economically valuable tasks.
While the breakthrough is exciting, some researchers are concerned that it could endanger human life if developed carelessly. They want assurances that the new technology will not be weaponized or put to military use.
In an internal message sent by chief technology officer Mira Murati, employees were told about a project called Q* and a letter to the board, one of the sources said. Murati also alerted them to media reports about the discovery without commenting on their accuracy.
The letter to the board was one item on a longer list of grievances that led to Altman’s ouster, which also included concerns about commercializing advances before fully understanding their consequences. Still, the two sources said the Q* discovery and its implications were a primary factor in the board’s decision.
Until his firing, Altman was the face of generative AI and a leading voice against using the technology for lethal purposes. His work on ChatGPT, an AI chatbot that sounded strikingly human, catapulted OpenAI to the forefront of the tech industry and put Altman on the world stage. He raised billions of dollars for the company and sent Google scrambling.
But when the company’s board fired him and named former Twitch CEO Emmett Shear interim chief executive, it set off an employee revolt that threatened to derail the company’s long-term strategy for achieving the full potential of AI.
The upheaval at OpenAI exemplified the growing tension between investors eager for fast commercialization of AI and scientists who believe the technology must be developed cautiously, with safeguards in place. In a post on X, OpenAI co-founder Elon Musk praised Altman’s “deep commitment to the safety of human life” and his “unwavering belief that the only way to ensure this is to do it ourselves.” Reuters originally reported this story.