On Sunday, the United States, Britain, and more than a dozen other countries unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors. The 18 countries agreed that companies designing and using AI need to develop and deploy it in a way that protects customers and the wider public from misuse. The nonbinding, 20-page document outlines guidelines and general recommendations: it calls for monitoring AI systems to prevent abuse, safeguarding data from tampering, and thoroughly vetting software suppliers. It also stresses that AI developers should make security a central part of the design process rather than an afterthought.
The agreement is aimed mainly at providers of AI systems, whether those systems are hosted by the providers' own organizations or accessed through external application programming interfaces (APIs). The goal is to help them treat cybersecurity as a critical requirement from the earliest stages of development and embed it throughout the life cycle of an AI system. It also encourages governments to collaborate with companies that develop or use AI when designing national systems.
While the agreement is a significant step forward, it sidesteps some of the more complicated issues surrounding the ethical use of AI and the data collection that fuels these systems. Those questions are fundamental as AI spreads into areas with profound implications, such as democracy, economic stability, and job security.
There are growing concerns that unchecked advances in AI could create superintelligent digital “minds” that no one, including their creators, can understand or reliably control. Such a development would raise fundamental privacy and security concerns, potentially enabling mass surveillance, terrorism, warfare, and other dangerous outcomes. The dangers are reminiscent of Cold War close calls: during the 1962 Cuban Missile Crisis, for example, a single veto by Soviet naval officer Vasily Arkhipov blocked the launch of a nuclear torpedo and may have saved the world from disaster.
Against this backdrop, Europe has taken the lead in promoting responsible AI safety and security. In the United States, the White House has pressed Congress to draft laws requiring companies to test their AI systems for vulnerabilities, but lawmakers have yet to make much headway. In the meantime, the White House is partnering with private companies to launch a forum for sharing best practices.
Despite the broad international support for the agreement, the specifics of its implementation will vary widely from nation to nation. Jen Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, said that so many nations uniting behind this principle signals a pivotal shift in focus. Previously, she said, there had been an “insatiable rush to market” that often prioritized features, speed, and cost competitiveness over the security of AI. The new agreement will “send a clear message that we must prioritize security throughout the design and deployment of AI.” She added that the United States will continue working with allies and partners to create a robust international framework that reflects this priority.