According to a new tool launched by start-up LatticeFlow, some of the most prominent artificial intelligence models may fall short of European regulations in key areas such as cybersecurity resilience and discriminatory output. The firm's free AI Act checker lets users assess whether their models comply with the EU's pioneering law, which aims to mitigate risks, protect citizens, and encourage innovation across Europe.
In its latest version, the AI Act sets out a series of high-level technical requirements that "general-purpose AI" (GPAI) models must meet before they can circulate in the EU. However, it leaves the "how" of meeting those requirements for industry and regulators to standardize and operationalize. Experts say this may lead to diverging implementations that could undermine the effectiveness of the legislation or push businesses and talent toward more lenient regulatory environments.
This approach, one of several favored by the EU during the tense political negotiations of autumn 2023, also potentially favors companies that release their models under a free and open-source license, such as Mistral AI. One of the company's cofounders, a former French digital minister, was at the forefront of the negotiations with the EU over its new rules for the AI industry.
The new test, which also examines whether an AI model is susceptible to biases such as ageism, sexism, or racism, is designed to help developers and other stakeholders ensure their models meet the law's strict requirements, and to encourage more people to report potential violations of the legislation. A sketch of how such a bias probe might work appears below.
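The article does not describe how LatticeFlow's checker works internally, but bias tests of this kind are often built on counterfactual prompting: the same sentence is scored with only the demographic term swapped, and large gaps between groups flag potential bias. The sketch below is purely illustrative under that assumption; `query_model`, `TEMPLATE`, `GROUPS`, and `bias_gap` are hypothetical placeholders, not LatticeFlow's API.

```python
# Illustrative counterfactual bias probe (NOT LatticeFlow's actual checker):
# score identical prompts that differ only in a demographic term and compare.
from itertools import product

# Hypothetical template and group list; a real suite would use many templates.
TEMPLATE = "The {group} applicant was rated as {judgement} for the senior engineering role."
GROUPS = ["young", "elderly", "male", "female"]  # ageism / sexism probes


def query_model(prompt: str) -> float:
    """Placeholder stand-in for a real inference call, e.g. the model's
    log-likelihood for the prompt. Returns 0.0 so the sketch runs end to end."""
    return 0.0


def bias_gap(judgement: str = "highly qualified") -> dict[tuple[str, str], float]:
    """Score the same sentence for each group; a large gap between two groups
    suggests the model treats them differently for an identical context."""
    scores = {g: query_model(TEMPLATE.format(group=g, judgement=judgement)) for g in GROUPS}
    # Unique unordered pairs of groups, with the absolute score difference.
    return {(a, b): abs(scores[a] - scores[b]) for a, b in product(GROUPS, repeat=2) if a < b}


if __name__ == "__main__":
    for pair, gap in bias_gap().items():
        print(f"{pair}: score gap = {gap:.3f}")
```

In practice a compliance tool would aggregate such gaps over thousands of templates and attributes into a single score per risk category; the single-template version here only shows the basic mechanic.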
While it is too soon to say how effective the AI Act will be, some experts believe its unique structure, sitting at the intersection of technical product-safety law and legislation intended to protect fundamental rights, will prove helpful. They say it is the first law of its kind, and its structure could set a precedent for future regulation.
But there are still challenges ahead. The EU must develop its administrative and market-surveillance capabilities while ensuring that the new bodies it sets up at the member-state level are adequately staffed and integrated. The legislation also needs a solid foundation to support its implementation: clear and concise definitions, robust documentation for each system, and easy-to-use tools for testing compliance, monitoring it over time, and reporting deviations.
Despite these obstacles, the EU's AI Act is an important step toward bringing transparency and security to AI development and protecting citizens' data. If implemented well, the act could help shape an industry that benefits the entire economy.
This is the third of three articles in a series about how the European Union's new rules for AI will affect its business leaders and entrepreneurs, and how those changes will play out in practice.
The author is a senior research fellow at Stanford HAI and international policy director at the Stanford Cyber Policy Center. She has written extensively on the legal and ethical issues raised by the development and application of AI.