The CEO of ChatGPT maker OpenAI said on Monday that while it was possible to get regulation wrong, regulation should nonetheless be pursued amid global concerns about rapid advances in artificial intelligence, or AI. Sam Altman, whose start-up is behind one of the most powerful publicly available AI systems, GPT-4, also called for guardrails that would stop companies from developing AI systems with the potential to cause harm. He spoke in Toronto, Canada, at the opening of a five-week global tour that has him visiting developers and users of the company’s AI tools, including its popular chatbot ChatGPT.
At a Senate subcommittee hearing on Tuesday, Altman argued that regulation was necessary because powerful AI systems could pose serious risks to humanity. He echoed other tech leaders, such as Meta CEO Mark Zuckerberg and Amazon founder Jeff Bezos, in telling lawmakers there was a need for action on the issue.
He compared the need for rules to govern AI to the need to control nuclear power. “There are some real dangers to generative AI that can be misused by dictators or people with bad intentions,” he said. “We need to have guardrails in place to ensure it can be used for good.”
In a conversation with the hosts of a podcast on the tech website Axios, he also stressed the importance of ensuring that those who supply AI technology do not use it to invade people’s privacy or manipulate their minds. He pointed to the need for a new international body similar to the International Atomic Energy Agency, which oversees civilian nuclear power and works to prevent the proliferation of weapons-grade material.
Many countries are planning AI regulation, with Britain hosting a global AI safety summit in November at Bletchley Park in Buckinghamshire, the wartime home of Britain’s Enigma codebreakers. The event is intended to rally leading AI nations to agree on rapid, targeted measures for improving safety in global AI use.
Altman, who co-founded OpenAI alongside Tesla and SpaceX chief executive Elon Musk, highlighted the need for consistent government regulatory approaches across borders. He said this would improve the odds that lawbreakers are held accountable and help ensure that enforcement efforts are directed at harms affecting society as a whole.
He cited the example of the EU AI Act, which would impose transparency requirements and other safeguards on companies in the field. The UK government is also working on its own AI bill, which it hopes to pass by the end of this year. Altman also discussed the EU’s plan to create an AI regulator and urged that it be “balanced between European and US traditions.” He added that it is vital for a regulator to focus on “the subtle details here that matter.”