How Will the European Union Define “Artificial Intelligence”?


26 May 2023 - Av. Yaren Denizaltı

Recent Developments on the European Union Artificial Intelligence Regulation ("Artificial Intelligence Act")

The European Union (“EU” or “Union”) continues its legislative efforts unabated, aiming to regulate and harmonize the rapidly developing field of artificial intelligence, which has consequences for many branches of law. It is inevitable that this uniform application will also shape the developing body of international artificial intelligence legislation. On April 21, 2021, the European Commission took the first step in this regard by presenting the Proposal for a Regulation of the European Parliament and of the Council[1] (hereinafter referred to as the “Draft Regulation”). At the EU level, the Draft Regulation has since been adopted, with some amendments, by the relevant committees of the European Parliament, with the definition of artificial intelligence also being settled at this stage.

Before addressing the details of the definition, it will be useful to briefly review the legal text, its features, and its importance:

  1. What Does the Artificial Intelligence Act Mean within EU Legislation?

The Draft Regulation is built on establishing a uniform legal framework for the development, marketing, and use of artificial intelligence in accordance with EU values. The EU also aims to be the global leader in developing human-centered, sustainable, safe, ethical, and reliable artificial intelligence.

In order to serve these objectives, a uniform and high level of protection is required across the EU, starting at the Union level. Hence, the instrument chosen from among the sources of EU law is a Regulation of the European Parliament and of the Council (“Regulation”). The rationale given in the Draft Regulation is the direct applicability of the same legal text in all Member States[2] and its contribution to the functioning of the EU internal market (“internal market”). Indeed, diverging rules in the domestic laws of different Member States could lead to divergent practice in the internal market and reduce legal certainty for companies developing, using, or operating artificial intelligence systems[3].

In its Draft Regulation proposal, the European Commission aims to (i) ensure that artificial intelligence systems placed on and used in the Union market are safe and respect existing law on fundamental rights and Union values; (ii) provide legal certainty to facilitate investment and innovation in the field of artificial intelligence; (iii) improve the governance and effective enforcement of existing law on fundamental rights and the safety requirements applicable to artificial intelligence systems; and (iv) facilitate the development of a single market for lawful, safe, and trustworthy artificial intelligence applications and prevent market fragmentation. Governments and companies using artificial intelligence tools will have different obligations depending on the level of risk. These risk categories are: Unacceptable Risk, High Risk, Limited Risk, and Minimal Risk.

  2. Defining Artificial Intelligence in the Draft Regulation and Details of the Human-Centered Structure

Article 3 of the Draft Regulation, titled "Definitions", defines an artificial intelligence system (“AI system”) as follows:

“(a) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

The techniques and approaches specified in Annex I are: “(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimization methods.”
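
To make the breadth of this definition concrete, the short Python sketch below illustrates a purely hypothetical rule-based expert system of the kind listed in Annex I(b) (logic- and knowledge-based approaches): even such a simple, human-authored program generates “recommendations” for a human-defined objective and influences the environment it interacts with, and would therefore fall within the scope of the quoted definition. All names and thresholds in the example are invented for illustration.

# Hypothetical illustration only: a minimal rule-based "expert system"
# (Annex I(b): logic- and knowledge-based approaches) producing a
# recommendation for a human-defined objective (loan eligibility).
# All field names and thresholds are invented for this example.

def recommend_loan(applicant: dict) -> str:
    """Return a recommendation: "approve", "review", or "decline"."""
    income = applicant["annual_income"]
    debt_ratio = applicant["debt_to_income"]

    # Human-defined knowledge base expressed as simple if-then rules.
    if income >= 50_000 and debt_ratio <= 0.35:
        return "approve"
    if income >= 30_000 and debt_ratio <= 0.45:
        return "review"
    return "decline"

if __name__ == "__main__":
    applicant = {"annual_income": 42_000, "debt_to_income": 0.40}
    # The output is a "recommendation" influencing the environment it
    # interacts with (the lending decision), as described in Article 3.
    print(recommend_loan(applicant))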

What is noteworthy in this definition is that it places human beings at the center. This is because, under the EU General Data Protection Regulation (“GDPR”), automated decision-making was already regarded as a risky aspect of artificial intelligence[4]: decisions with significant legal consequences could not be taken by automated decision-making mechanisms operating independently of human supervision. To address these risks, the human element is not removed from the artificial intelligence equation, yet artificial intelligence itself is not unduly restricted either.

  3. What Does the Adoption of the Definition of Artificial Intelligence by the European Parliament Committees on May 11, 2023, Reflect? What Are the Additional Regulations Envisaged by the European Parliament?

In their amendments to the European Commission's proposal, Members of the European Parliament emphasized the need for a uniform, technology-neutral definition of artificial intelligence that can be applied to the AI systems of today and tomorrow, with the aim of ensuring that artificial intelligence systems are supervised by humans and are safe, transparent, traceable, non-discriminatory, and environmentally friendly.

On May 11, 2023, Members of the European Parliament voted to adopt the following amendments[5], intended to ensure the human-centered and ethical development of artificial intelligence, in addition to the rules set out in the Commission's Proposal for a Regulation:

Prohibiting the use of facial recognition in public spaces and of biometric surveillance, emotion recognition, and predictive policing applications

Taking a risk-based approach to AI, the amendments significantly expand the list of prohibited AI systems to cover intrusive and discriminatory uses, with the following additions:

• “Real-time” remote biometric identification systems in publicly accessible spaces;

• “Post” (ex-post) remote biometric identification systems, with the sole exception of use by law enforcement for the prosecution of serious crimes and only after judicial authorization;

• Biometric categorization systems using sensitive characteristics (e.g., gender, race, ethnicity, citizenship status, religion, political orientation);

• Predictive policing systems (based on profiling, location, or past criminal behavior);

• Emotion recognition systems in law enforcement, border management, workplaces, and educational institutions; and

• Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (in violation of human rights and the right to privacy).

Introducing new transparency measures for generative artificial intelligence applications such as OpenAI's ChatGPT

Providers of foundation models will need to assess and mitigate risks, comply with design, information, and environmental requirements, and register their models in the EU database.

Generative foundation models such as GPT would additionally need to comply with transparency requirements, such as disclosing that content was generated by AI, designing the model to prevent it from generating illegal content, and publishing summaries of the copyrighted data used for training.

Adding new rules on risk management and on the filing of complaints about artificial intelligence systems

The European Parliament wants to strengthen the right of citizens to lodge complaints about artificial intelligence systems and to receive explanations of decisions based on high-risk AI systems that significantly affect their rights. Members of the European Parliament have also expanded the classification of high-risk areas to include harm to people's health, safety, fundamental rights, or the environment.

In addition, artificial intelligence systems used to influence voters in political campaigns, as well as the recommender systems used by social media platforms with more than 45 million users under the Digital Services Act, were added to the high-risk list.

Alongside these fundamental changes, in order to promote innovation and protect citizens' rights, Members of the European Parliament:

  • Added exemptions to these rules for research activities and artificial intelligence components provided under open-source licenses; and
  • Made changes to the role of the EU Artificial Intelligence Office, which will be tasked with monitoring how the artificial intelligence rulebook is implemented.

The Draft Regulation will first be put before the plenary of the European Parliament in June 2023, so that the draft can be debated and approved by the Parliament as a whole. Subsequently, trilogue negotiations will take place between representatives of the European Parliament, the Council of the European Union, and the European Commission, with the aim of reaching agreement on the final text.

  4. What to Expect

For countries outside the EU, the Draft Regulation also significantly strengthens the Union's role in shaping global norms and standards and in promoting trustworthy artificial intelligence consistent with Union values and interests. It provides the EU with a strong basis for further engagement with external partners, including third countries, on AI-related issues and in international fora. Article 2 of the Draft Regulation provides that the Regulation applies to “(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country” and “(c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union”. Consequently, providers of artificial intelligence software that cannot forgo entering the EU internal market, one of the world's largest markets, will inevitably have to comply with these rules as well.

In addition, the fact that the EU has chosen a definition[6] parallel to that of the Organisation for Economic Co-operation and Development (OECD), of which Turkey is a member, indicates that uniformity in international artificial intelligence legislation is both a goal and an inevitable outcome.


SOURCES

  • Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts [2021] COM (2021) 206
  • European Parliament Legislative Observatory, Artificial Intelligence Act, 21/04/2023, <https://oeil.secure.europarl.europa.eu/oeil/popups/summary.do?id=1658804&t=e&l=en> Access Date: 18.05.2023
  • Organisation for Economic Co-operation and Development (OECD), Recommendation of the Council on Artificial Intelligence [2019] OECD/LEGAL/0449


[1] Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts

[2] The Treaty on the Functioning of the European Union (TFEU), Art. 288

[3] 2021/0106 (COD), Explanatory Memorandum, para 2.4.

[4] See The European Commission, Can I be subject to automated individual decision-making, including profiling? <https://commission.europa.eu/law/law-topic/data-protection/reform/rights-citizens/my-rights/can-i-be-subject-automated-individual-decision-making-including-profiling_en> Access Date: 17.05.2023; GDPR Art. 22

[5] See The European Parliament, AI Act: a step closer to the first rules on Artificial Intelligence, <https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence> Access Date: 18.05.2023

[6] OECD/LEGAL/0449, para I: “An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”