A Look at AI Legislation in Europe
Over 60 countries have put forth some form of AI public policy since 2017, a sign of growing concern about the potential misuse of AI.[1] Let’s look at where the most significant regulatory activity is taking place, namely in Europe and the US. This blog post will focus on what Europe is doing with respect to AI regulations and laws, and my next one will look at the US.
Europe has led the way in regulating privacy and other technology areas over the last decade, so it is not surprising that it is also taking the lead in regulating AI.
Europe implemented the General Data Protection Regulation (GDPR) in 2018, and it remains the gold standard for privacy and data protection laws. Article 22 of the GDPR gives data subjects (any “identifiable natural person” in the EU) the right not to be subject to a decision based solely on automated processing that significantly affects them. This includes decisions based on profiling. Profiling in the context of the GDPR means using personal data to evaluate and score a data subject, e.g., predicting their performance at work, health, or personal preferences.[2]
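To make the profiling idea concrete, here is a minimal sketch, with hypothetical names and scoring logic of my own invention, of the kind of solely automated decision Article 22 is aimed at, plus a human-review path that would take the decision out of “solely automated” territory:

```python
# Hypothetical sketch (mine, not from the GDPR): a solely automated
# scoring decision of the kind Article 22 covers, with a human-review
# path so the decision is no longer "solely" automated.
from dataclasses import dataclass

@dataclass
class Applicant:
    years_employed: int
    debt_ratio: float  # fraction of income committed to debt, 0..1

def credit_score(a: Applicant) -> float:
    """Profiling: evaluating and scoring a data subject from personal data."""
    return 0.6 * min(a.years_employed, 10) / 10 + 0.4 * (1 - a.debt_ratio)

def decide(a: Applicant, human_review: bool = False) -> str:
    if human_review:  # the data subject exercised their Article 22 right
        return "routed to a human reviewer"
    return "approved" if credit_score(a) >= 0.5 else "denied"  # solely automated

print(decide(Applicant(years_employed=3, debt_ratio=0.5)))  # denied
```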
But Europe realizes that more is needed than simply giving its citizens the “right to object” to AI-driven decision-making and profiling. There is growing concern about AI’s potential impact on the fundamental rights protected under the EU Charter of Fundamental Rights; namely, that AI may jeopardize the “right to non-discrimination, freedom of expression, human dignity, personal data protection, and privacy.” There is also concern about the safety risks of AI-based products, to both physical and mental well-being.[3]
Europe started down the path of regulating AI by first creating working groups and publishing papers such as the “Ethics Guidelines for Trustworthy AI,” which I summarized in my last blog post. However, the EU’s focus has now shifted to enacting “horizontal legislation,” i.e., legislation that applies across many industries, to regulate specific uses of AI systems and their associated risks. In April 2021, the European Commission proposed the Artificial Intelligence Act (AI Act). The draft proposal is working its way through the European Union’s legislative process, which may take a few years before it is implemented.
The new rules would apply to providers of AI systems based in the EU, as well as to businesses outside the EU that provide AI-based products and services used inside the EU. Much as it did with the GDPR, the EU clearly wants to “export its values across the world” by applying its AI rules to non-European businesses operating in Europe. And given the GDPR’s success in setting the standard for privacy laws, the AI Act may well continue the “Brussels effect” of setting global tech standards.[4]
The AI Act takes a risk-based approach to regulating AI, with different requirements and obligations based on the level of risk. At the top of the risk pyramid are AI systems presenting “unacceptable” risks, which would be prohibited outright. Next come “high-risk” AI systems, which are authorized but would be heavily regulated and would need to address most of the trustworthy AI requirements that I detailed in my last blog post. Below “high-risk” are “limited-risk” AI systems, which need only meet transparency obligations. Finally, at the bottom of the pyramid are “low and minimal risk” AI systems, which have no obligations attached to their deployment and use.[5]
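One way to picture this structure is as a lookup from risk tier to obligations. The tier names below follow the proposal, but the obligation summaries are my paraphrase, not statutory text:

```python
# The AI Act's four-tier pyramid as a simple lookup table (illustrative;
# obligation summaries are paraphrased, not quoted from the proposal).
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # authorized but heavily regulated
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no obligations

OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["banned from the EU market"],
    RiskTier.HIGH: ["EU-wide registration", "conformity assessment",
                    "human oversight", "post-market monitoring"],
    RiskTier.LIMITED: ["disclose the AI interaction", "label deepfakes"],
    RiskTier.MINIMAL: [],
}

print(OBLIGATIONS[RiskTier.LIMITED])
```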
“Unacceptable risk” AI systems are explicitly banned because they are considered a “clear threat” to public safety and human rights. Four types of AI systems are named in this category: (1) AI systems that deploy “subliminal techniques” resulting in physical or psychological harm; (2) AI systems used by public authorities for China-style “social scoring”; (3) AI systems that exploit specific vulnerable groups, such as children or the elderly, in ways that result in physical or psychological harm; and (4) real-time biometric identification systems used by law enforcement in public spaces, with specific carveouts such as investigations related to missing children or terrorist incidents.[6]
“High-risk” AI systems are those that could adversely impact fundamental rights or safety. Eight categories of high-risk AI systems are identified as impacting fundamental rights, including education and training, law enforcement, migration and border control, employment and worker management, and management of critical infrastructure. On the safety side, any AI system used as a safety component of a product covered by EU safety laws (e.g., toys, aviation, cars, and medical devices) is also considered high-risk.[7] The European Commission estimates that 5 to 15% of all AI systems fall into the high-risk category.[8]
Providers of high-risk AI systems would have to register in an EU-wide database before their systems become available on the EU market. They would then have to pass a conformity assessment showing they meet the requirements for high-risk systems. These requirements include using high-quality training, validation, and testing data; establishing traceability and auditability; ensuring transparency to end users; requiring human oversight (including a “kill switch” if the AI system poses a risk to fundamental rights and safety); and ensuring the robustness, accuracy, and cybersecurity of the underlying AI system. Furthermore, providers of these high-risk systems would have to perform post-market monitoring to evaluate ongoing compliance with these requirements.
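The Act leaves the engineering of human oversight to providers; here is one illustrative sketch (my own construction, not a mandated design) of a prediction wrapper with a human-operated kill switch and an audit trail for traceability:

```python
# Illustrative only: a serving wrapper a human overseer can halt, which
# also logs every prediction to support traceability and auditability.
import threading

class OverseenModel:
    def __init__(self, model):
        self.model = model                # any callable: x -> prediction
        self._halted = threading.Event()  # the "kill switch"
        self.audit_log = []               # trace for post-market monitoring

    def halt(self, reason: str) -> None:
        """Invoked by a human overseer if the system poses a risk."""
        self._halted.set()
        self.audit_log.append(("HALTED", reason))

    def predict(self, x):
        if self._halted.is_set():
            raise RuntimeError("system halted pending human review")
        y = self.model(x)
        self.audit_log.append((x, y))     # traceability of every decision
        return y

m = OverseenModel(lambda x: x > 0)
print(m.predict(3))                  # True, and logged
m.halt("bias detected in outputs")   # m.predict(4) would now raise
```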
“Limited risk” AI systems are those that interact with humans, such as chatbots, or that manipulate audio, video, or image content (i.e., deepfakes). These AI systems have transparency obligations, including notifying humans that they are interacting with an AI system unless it is evident, informing humans that emotion recognition or biometric categorization systems are being applied to them, and labeling deepfakes.[9]
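As a sketch of what those obligations could look like in practice (the function names and message text below are my own, purely illustrative):

```python
# Illustrative transparency hooks for a limited-risk system: disclose the
# AI up front, and label AI-generated or AI-manipulated media.
AI_DISCLOSURE = "You are chatting with an AI system, not a human."

def open_chat_session(send) -> None:
    send(AI_DISCLOSURE)  # notify the user, unless it is already evident

def label_deepfake(metadata: dict) -> dict:
    """Attach a label to AI-generated or AI-altered content."""
    metadata["ai_generated"] = True
    metadata["label"] = "This content was generated or altered by AI."
    return metadata

open_chat_session(print)
print(label_deepfake({"file": "clip.mp4"}))
```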
Finally, “low or minimal risk” AI systems are permitted with no restrictions. Examples include spam filters and AI-enabled video games. However, the proposed AI Act envisages voluntary codes of conduct under which providers of these systems could choose to apply the same requirements that bind high-risk AI systems.[10]
From a governance and enforcement perspective, the AI Act would establish a European Artificial Intelligence Board to oversee rulemaking for the Act. Non-compliance with the unacceptable-risk prohibitions could lead to administrative fines of up to 30 million euros or 6% of annual worldwide turnover, whichever is higher. Non-compliance with the other provisions could lead to fines of up to 20 million euros or 4% of annual worldwide turnover. Given the size of the penalties and the scope of the AI Act, larger US-based Big Tech companies will undoubtedly pay heed to the AI Act if it is implemented, much as the GDPR governs them today.[11]
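Using the proposal’s figures, the ceiling is the greater of the fixed amount and the turnover percentage. A quick back-of-the-envelope sketch (my own arithmetic, not official guidance):

```python
# Fine ceilings under the draft AI Act: the cap is the greater of a
# fixed amount and a share of annual worldwide turnover.
def max_fine_eur(annual_worldwide_turnover: float, prohibited_use: bool) -> float:
    fixed, share = (30e6, 0.06) if prohibited_use else (20e6, 0.04)
    return max(fixed, share * annual_worldwide_turnover)

# A firm with EUR 10 billion turnover deploying a prohibited system:
print(max_fine_eur(10e9, prohibited_use=True))   # 600000000.0 (EUR 600M)
# A firm with EUR 50 million turnover missing other obligations:
print(max_fine_eur(50e6, prohibited_use=False))  # 20000000.0 (fixed amount dominates)
```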
That being said, Big Tech largely avoids significant regulation under the Act because AI-based systems in social media, search, online retailing, app stores, mobile apps, and mobile operating systems are not deemed high-risk. This means the providers of those AI systems would, at most, have to fulfill the transparency obligations that limited-risk AI systems must meet. Furthermore, the conformity assessment is a self-assessment and, in effect, an internal check-off, meaning there is no audit report for regulators or the public to review.[12]
But even with those potential weaknesses, the AI Act could be a major step forward in regulating AI and making it trustworthy. It would complement the recently passed Digital Services Act (DSA), which starting in 2024 will allow the European Commission to inspect the AI systems businesses use to moderate content and target advertising. Together, the AI Act and the DSA give the EU another chance, as with the GDPR, to extend the “Brussels effect” of setting global tech standards to AI.[13]
[1] Alex Engler, “The EU and US are starting to align on AI regulation,” The Brookings Institution, February 1, 2022, https://www.brookings.edu/blog/techtank/2022/02/01/the-eu-and-u-s-are-starting-to-align-on-ai-regulation/.
[2] Intersoft Consulting, “General Data Protection Regulation,” https://gdpr-info.eu/.
[3] Tambiama Madiega, “Artificial intelligence act,” European Parliamentary Research Service, January 2022, https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/698792/EPRS_BRI(2021)698792_EN.pdf.
[4] Eve Gaumond, “Artificial Intelligence Act: What is the European Approach to AI?”, Lawfare Blog, June 4, 2021, https://www.lawfareblog.com/artificial-intelligence-act-what-european-approach-ai.
[5] Tambiama Madiega, “Artificial intelligence act,” European Parliamentary Research Service, January 2022.
[6] Lucilla Sioli, “A European Strategy for Artificial Intelligence,” European Commission, April 23, 2021.
[7] Tambiama Madiega, “Artificial intelligence act,” European Parliamentary Research Service, January 2022.
[8] Melissa Heikkilä, “A quick guide to the most important AI law you’ve never heard of,” MIT Technology Review, May 13, 2022, https://www.technologyreview.com/2022/05/13/1052223/guide-ai-act-europe/.
[9] Lucilla Sioli, “A European Strategy for Artificial Intelligence,” European Commission, April 23, 2021.
[10] Eve Gaumond, “Artificial Intelligence Act: What is the European Approach to AI?”, Lawfare Blog, June 4, 2021.
[11] Eve Gaumond, “Artificial Intelligence Act: What is the European Approach to AI?”, Lawfare Blog, June 4, 2021.
[12] Mark MacCarthy and Kenneth Propp, “Machines Learn that Brussels Writes the Rules: The EU’s New AI Regulation,” Lawfare Blog, April 28, 2021, https://www.lawfareblog.com/machines-learn-brussels-writes-rules-eus-new-ai-regulation.
[13] Eve Gaumond, “Artificial Intelligence Act: What is the European Approach to AI?”, Lawfare Blog, June 4, 2021.