On Sunday, the United States, Britain, and more than a dozen other countries unveiled what a senior U.S. official described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors. The agreement stresses the need for companies to create A.I. systems that are “secure by design,” with security measures built into systems from the start rather than added after the fact, as is often the case today.
The 20-page document outlines general, nonbinding recommendations, including monitoring A.I. systems for abuse, protecting data from tampering, and vetting software suppliers. It also calls for companies to educate staff on cybersecurity risks and warns against using publicly available open-source software without adequate vetting. The document was co-written by Britain’s National Cyber Security Centre, the U.K. equivalent of the U.S. Cybersecurity and Infrastructure Security Agency, and endorsed by agencies from 18 countries in all, including Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria and Singapore.
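The guidelines themselves are prose recommendations rather than code, but advice such as “protecting data from tampering” maps onto familiar engineering practice. As a hypothetical illustration (the digest value, file path and function names below are invented for this sketch, not drawn from the document), a deployment pipeline might record a cryptographic hash for each A.I. model artifact when it is vetted and refuse to load any file that no longer matches:

```python
import hashlib
from pathlib import Path

# Digest recorded when the artifact was vetted (placeholder value for illustration).
EXPECTED_SHA256 = "9f2c8a41d6e0b7c3a5f1e8d2b4c6a8e0f1d3b5a7c9e1f3a5b7d9e1f3a5b7d9e1"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large model files never sit whole in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_weights(path: Path) -> bytes:
    """Refuse to load a model artifact whose hash does not match the pinned value."""
    actual = sha256_of(path)
    if actual != EXPECTED_SHA256:
        raise RuntimeError(
            f"Integrity check failed for {path}: expected {EXPECTED_SHA256}, got {actual}"
        )
    return path.read_bytes()  # Only now is it handed to the inference runtime.
```

Pinning the hash at vetting time means a swapped or corrupted model file fails loudly before it ever reaches production, which is the spirit of building security in from the start rather than bolting it on later.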
While the guidelines are nonbinding, they are significant in establishing global standards for responsible A.I. development. It is the first time so many countries have agreed that security should be a top priority in designing A.I. systems, according to CISA Director Jen Easterly. Supporters of the agreement say the new guidelines provide an essential basis for international cooperation, reflecting the growing recognition that A.I. security is a shared responsibility.
The world increasingly relies on A.I. technologies to drive business and improve people’s lives. Health services in the U.K., for example, use a system known as SMART Box to send chest X-rays and C.T. scans to central servers, where A.I. models help generate diagnoses and prescriptions. Businesses rely on A.I.-optimized supply chains and on autonomous robots to sort warehouse inventory. But the technology also holds the potential to enable mass surveillance, fuel social control and cause economic instability, among other threats.
This is why the international community is stepping up efforts to address A.I. vulnerabilities. In October, the White House issued an executive order aimed at mitigating A.I. risks to consumers, workers and minority groups, as well as to national security. The European Union, which has long led in regulating the technology, is working on a new law to govern its use.
The new agreement and other initiatives highlight how important it is for governments to set clear standards for A.I., not simply because of the risk of harm but because of the technology’s potential for good. Without strong guidelines, A.I. could be used to manipulate elections, spread misinformation, disrupt financial systems and erode trust in the global economy. It could also become a tool of censorship or a platform for sexism, racism and other forms of discrimination. Proponents argue that the guidelines will help minimize those risks while keeping the transformative benefits of A.I. broadly accessible. U.S. officials have said the government will continue to work with partners worldwide to shape the future of the global digital economy.