Developments In Artificial Intelligence And Current Status Around The World
- Introduction
The use of artificial intelligence systems is expanding rapidly around the world and is becoming widespread in many sectors, notably education, health, industry, gaming, and entertainment. This sustained growth in usage has created a need for legal regulation in the field of artificial intelligence, and the first steps toward such regulation have been taken in recent years. Indeed, according to Stanford University's reports, the number of countries whose laws mention the concept of artificial intelligence rose from 25 in 2022 to 127 in 2023[1].
This article analyzes the current approaches and regulations in the field of artificial intelligence in the European Union ("EU"), the United Kingdom ("UK"), the United States of America ("USA") and finally within Türkiye.
- The Approach of the European Union and the Current Status
The European Union has been at the forefront of legal work on artificial intelligence. As a matter of fact, the Artificial Intelligence Act[2] (the "AI Act"), the first comprehensive law in this field, was approved by the European Parliament on March 13, 2024. It is expected to enter into force gradually after its publication in the Official Journal of the European Union.
The European Union takes a holistic approach to AI regulation. Article 3 of the AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. At the same time, the AI Act adopts a risk-based approach and categorizes AI systems according to their anticipated risk levels. Accordingly, Articles 5, 6 and 50 of the AI Act regulate "prohibited", "high-risk" and "subject to transparency obligations" AI systems, respectively.
European law also foresees legal obligations for General Purpose Artificial Intelligence models. These models are not designed for a specific task; rather, they aim to imitate various aspects of human intelligence in a generalized way and can therefore perform many different tasks. Because of this nature, the European Union prefers stricter regulation for this category of artificial intelligence.
In fact, certain providers of AI systems are subject to obligations such as testing before launch and monitoring after launch. Open-source AI systems are excluded from these obligations unless they fall within the groups defined by the AI Act as prohibited, high-risk or subject to transparency obligations.
EU Member States’ authorities aim to encourage innovation by establishing regulatory sandboxes and enabling real-world testing within them. Regulatory sandboxes are controlled environments, under the authority and oversight of state bodies, that allow stakeholders to develop and train innovative AI systems before releasing them onto the market.
As an EU regulation, this Act holds great importance as it is directly applicable in a total of 27 countries. It establishes a common supervisory and enforcement regime between the member states and the European Commission, granting the EU AI Office[3] within the European Commission exclusive powers over General Purpose Artificial Intelligence models, while member states will be able to supervise other artificial intelligence systems.
The sanctions foreseen by the AI Act are also quite severe: non-compliance can trigger fines of up to €40 million or 7% of annual worldwide turnover.
- The Approach of the United Kingdom and the Current Status
On February 6, 2024, the UK Department for Science, Innovation and Technology published a response[4] ("Government Response") to the White Paper on Artificial Intelligence Regulation ("White Paper"). The position taken in the Government Response is that AI regulation should foster innovation while ensuring safety.
The Government Response adopts a context-based approach to AI regulation. As such, there is no formal, general definition of AI within the UK. Rather than imposing blanket rules on all AI technologies regardless of how they are used, the aim is to regulate AI according to its context of use and to avoid unnecessary general rules. As stated in paragraph 10 of the Government Response, the primary objective is to ensure that five cross-sectoral principles are observed in AI regulation: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
It is also planned to establish a risk-focused central government coordination function that will monitor and assess risks across the economy and support legislation addressing existing legal gaps.
In the Government Response, AI risk categories are divided into three groups: societal harms, risks of misuse and risks of autonomy. Societal harms include risks such as labor issues, privacy, bias and discrimination, and intellectual property protection; misuse risks include risks such as election interference, cyber-attacks and crime, and AI-based weapons; and autonomy risks primarily include risks such as advanced AI systems that may escape human control.
Due to the high risks associated with General Purpose Artificial Intelligence systems, which are developed by only a small number of organizations in the UK, more obligations are expected to be imposed on their developers; however, these obligations are not intended to reach a level of severity that would hinder innovation.
Before artificial intelligence systems are produced and released to the market, developers must cooperate with the UK AI Safety Institute and pass a number of tests.
As for open-source AI systems, the position is that while the open availability of AI systems to the public is beneficial for innovation, powerful General Purpose AI systems, especially open-source ones, should undergo capability testing and risk assessment before being released to the market.
- The Approach of the United States and the Current Status
In contrast to the holistic approach taken by the European Union and the United Kingdom, the United States has adopted a sectoral approach to AI regulation. The US, which prioritizes human rights, international cooperation, and democratic values, has also taken a slower, more gradual path on artificial intelligence regulation.
The most recent legal instrument under US law is the Executive Order[5] of October 30, 2023. Building on existing regulations, the Executive Order broadly addresses the safe development and use of artificial intelligence systems.
The Executive Order, which is not a legislative act and does not have the force of law, contains directives and recommendations for many federal departments and various other organizations in the artificial intelligence ecosystem. It is notable for its new security standards for developers of powerful artificial intelligence systems: such developers are required to test their systems against the new standards and share the results of those tests to ensure the security of their systems. These tests should identify measures that can be taken against possible cyber-attacks and malicious use. Developers are also expected to build artificial intelligence tools to detect and repair vulnerabilities in their software.
Another notable aspect of the Order is that it specifically aims to eliminate risks of violation of the constitutional right to privacy. To that end, it includes a number of obligations and recommendations for developers. These obligations are particularly directed at organizations that collect and/or process users' data and require transparent assessments of the data collection and processing process. The Order also provides for government support for the development of privacy-protecting methods.
The Executive Order also declares that as artificial intelligence systems continue to develop, fundamental rights and notions of equality should not be negatively affected by these emerging artificial intelligence systems.
- The Approach of Türkiye and Current Status
There are no specific regulations on artificial intelligence systems in Turkish law. However, in 2019, within the scope of government policies, the Department of Big Data and Artificial Intelligence Applications was established within the Presidential Digital Transformation Office. The National Artificial Intelligence Strategy 2021-2025[6], which sets out Türkiye's long-term roadmap on artificial intelligence for the coming period, was put into practice with the Presidential Circular numbered 2021/18. Although the need for legislation on the subject is emphasized in this document dated 2021, there is no concrete regulation on artificial intelligence systems yet.
In addition, on January 19, 2024, the Personal Data Protection Authority published an information note to raise public awareness of "deep fakes" produced through the use of artificial intelligence systems. This document is important because it sets out the risks that deep fakes may pose and the measures that can be taken against them.
- Conclusion
In light of this review, the European Union stands out as a pioneer in legislating for artificial intelligence systems: it has an Artificial Intelligence Act that has already been approved and is expected to enter into force.
In the US and the UK, by contrast, legally binding texts have not yet been enacted, and issues related to artificial intelligence systems are mostly handled by the executive branch. In Türkiye, although the executive has been active in this field, no concrete legislative steps have been taken yet.
[3] The EU Artificial Intelligence Office was established by the European Commission on January 24, 2024.