EU countries and lawmakers have agreed on rules governing artificial intelligence systems such as ChatGPT after 10 hours of talks, a person with direct knowledge of the discussions said on Thursday. The deal on new regulations focuses on how these systems collect, use, and protect data. It is a significant step towards laws regulating cutting-edge technology such as ChatGPT, built by Microsoft-backed OpenAI, and other systems that regulators have criticized for problems ranging from unreliable output to murky privacy policies.
The talks also focused on a new definition of high-risk AI and on requiring all companies operating such systems to provide source code for inspection and testing. The discussions, which began at 1400 GMT on Wednesday and ran through the night, were held amid tension between the interests of Big Tech companies, which have invested billions in these technologies, and concerns over human rights.
A new law would require all companies operating such technology to disclose how much data they collect on users and how that information is used. It would force them to restrict access to this data to people with legitimate business needs, such as customers or suppliers, and it aims to give consumers the right to withdraw their consent to this collection and to correct inaccurate personal data. The new rules, which are not expected to take effect before 2025, will apply to all European Union member states and cover both private and public services.
Last year, ChatGPT hit a legal snag when Italy's data protection authority blocked access to the service, criticizing its poor security and transparency. Earlier this month, OpenAI appeased the Italian authorities by agreeing to tighten its security measures.
ChatGPT is part of a new breed of generative AI systems that have emerged recently and are poised to transform communication. These systems are designed to generate text, images, music, and code in response to prompts, and they have a wide range of uses, from writing essays to engaging in philosophical conversations to producing computer programs.
However, these systems can be unreliable, producing content that infringes copyright or is rife with bias. They often store vast amounts of user data, and because much of it is sensitive, it can become a target for hackers.
Regulators also worry that companies could export these AI systems for use by governments that violate human rights, pointing to examples such as China's use of surveillance systems to target its Uyghur minority and Israel's use of Dutch-made cameras to monitor Palestinians. The European Parliament's draft regulation aims to curb these risks by prohibiting the export of technologies that pose a "significant risk" to people's health, safety, and fundamental rights. In addition, sources say it establishes a code of conduct under which AI companies can self-regulate and introduces several new requirements for the use of biometric surveillance.