On Friday, European Union negotiators reached a provisional deal on landmark rules governing the use of artificial intelligence, including governments’ use of AI in biometric surveillance. The political agreement is the first step toward making the EU the first major world power to enact laws regulating AI, but the deal still needs to be approved by the 27 member countries, and final technical details remain to be settled, the New York Times reported.
The new law aims to ensure that AI systems are trustworthy, promoting innovation while safeguarding against potential harms. The draft AI Act would limit how the technology can be used by law enforcement, governments, and providers of essential services such as water and energy. It would also set transparency requirements for large AI companies such as Google, Microsoft, and OpenAI and require AI-generated images to be labeled as such. The law would further bar the indiscriminate scraping of facial images from social media and CCTV footage to build facial recognition databases and ban cognitive behavioral manipulation, in which AI is used to change people’s attitudes or behavior.
Negotiations on the law spanned several years and involved intense lobbying by tech companies of all sizes as well as by human rights groups. Some of the most contentious debates over the bill focused on how to regulate governments’ use of AI for surveillance and policing. European digital privacy and civil liberties groups pushed lawmakers to oppose broad carve-outs for national security and policing.
Other sticking points included how to define an AI system and how much regulation should apply to different types of AI. The final version of the legislation sets precise criteria for what counts as an AI system and takes a tiered approach to regulation, including clear obligations for deployers of high-risk AI systems to conduct a fundamental rights impact assessment. The legislation also makes it easier to challenge potentially harmful decisions made with AI.
The provisional deal was reached after nearly 15 hours of negotiations between EU countries and members of the European Parliament. Dragos Tudorache, one of the Parliament’s lead negotiators, said the agreement “will ensure that we don’t become a global dumping ground for unsafe and untrustworthy algorithms.”
Among the specific details still to be worked out in the next phase of the legislative process are how to regulate live biometrics, such as video surveillance of public spaces, and how to protect people from malicious AI that tries to steal their identities or manipulate their behavior. The agreement must then be formally voted on and approved by the 27 member countries before it becomes law.
Tech industry executives welcomed the political deal but remained cautious about the legal challenges that will follow, saying they want more detail on how the law will be enforced, what fines it will carry, and how it will affect generative AI systems such as ChatGPT, which can produce wildly diverse text, images, and music. Civil liberties groups, meanwhile, are concerned that the provisions don’t go far enough to protect people from the risks of AI.