In a move that surprised many in the AI industry, Ilya Sutskever, former chief scientist and co-founder of OpenAI, announced the launch of a new artificial intelligence (AI) company just a month after his departure. The new venture, aptly named Safe Superintelligence, underscores Sutskever’s unwavering focus on building safe and beneficial AI.
Sutskever’s departure from OpenAI was shrouded in controversy. He reportedly played a central role in the board’s attempt to oust CEO Sam Altman in late 2023; after Altman was reinstated, Sutskever stepped down from the board and, months later, left the company altogether. His vision for the future of AI, however, remains clear: prioritize safety above all else.
Safe Superintelligence isn’t shy about its mission. The name itself is a declaration of intent. The company website is currently bare-bones, but Sutskever’s message on the microblogging platform X (formerly Twitter) leaves little room for doubt: “Building safe superintelligence (SSI) is the most important technical problem of our time,” he wrote. “Safe Superintelligence is our mission, name, and entire product roadmap.”
This laser focus on safety marks a departure from OpenAI, which pairs its safety work with commercial products, a deep partnership with Microsoft, and exploration of AI applications across many fields. Safe Superintelligence, by contrast, says safe superintelligence will be its only product, and its founders have signaled they intend to insulate the work from short-term commercial pressure while laying the groundwork for a superintelligent AI that poses minimal existential risk.
Sutskever isn’t venturing into this ambitious project alone. He is joined by two accomplished co-founders: Daniel Levy, a former OpenAI researcher, and Daniel Gross, who previously led AI efforts at Apple. The company has offices in California and Tel Aviv, Israel, positioning it to recruit from two major AI talent hubs.
The question remains: how will Safe Superintelligence achieve its lofty goal? Specific details about the company’s technical approach are scarce at this point. Outside observers speculate that the team might explore areas like formal verification, a family of techniques from software engineering for mathematically proving that a system satisfies specified properties, which researchers hope to extend to guarantees about AI behavior. Research on value alignment, which aims to keep an AI system’s goals compatible with human values, could also be a central focus.
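To make the formal-verification idea concrete, here is a minimal sketch using the Python bindings of the Z3 SMT solver (the z3-solver package). The toy “controller,” its clamping logic, and the [-1, 1] safety bound are hypothetical illustrations, not anything Safe Superintelligence has described; the point is only that a solver can prove a property holds for every possible input, rather than just the inputs covered by tests.

```python
# A minimal formal-verification sketch using the Z3 SMT solver (pip install z3-solver).
# The "controller" below is a hypothetical stand-in for a system under verification:
# it clamps an arbitrary real-valued input to the range [-1, 1].
from z3 import Real, Solver, If, Or, unsat

x = Real("x")                             # symbolic input: any real number
output = If(x > 1, 1, If(x < -1, -1, x))  # toy controller: clamp x to [-1, 1]

solver = Solver()
# Ask the solver for any input that would violate the safety property |output| <= 1.
solver.add(Or(output > 1, output < -1))

if solver.check() == unsat:
    # No violating input exists, so the property holds for *all* inputs.
    print("Proved: the controller's output always stays within [-1, 1].")
else:
    print("Counterexample:", solver.model())
```

Real AI systems are vastly harder to specify and verify than this toy example, which is precisely why formal methods for advanced AI remain an open research problem.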
The news of Safe Superintelligence’s launch has sparked a wave of reactions within the AI community. Some hail it as a necessary step towards ensuring the safe development of superintelligence. Others express skepticism, questioning whether such a monumental task is feasible at all. Regardless of the initial reactions, Safe Superintelligence has undoubtedly thrown down the gauntlet, prompting a crucial conversation about the future of AI and the potential risks associated with superintelligence.
Sutskever’s bold move has undeniably shaken things up in the AI landscape. Whether Safe Superintelligence succeeds in its ambitious mission or not, it has forced the industry to confront the critical issue of AI safety head-on. As the race towards artificial general intelligence intensifies, companies like Safe Superintelligence serve as a vital reminder that the ultimate goal isn’t just to create the most powerful AI but the safest and most beneficial one for humanity.