Sunday, October 1, 2023

EU AI Act: Summary and Analysis


— Rishiraj Chandan


In this blog, we'll talk about the EU Artificial Intelligence Act, the first-ever comprehensive attempt to regulate the uses and risks of this emerging technology.

Introduction

Artificial intelligence is here to stay, and these technologies are expected to bring economic and societal benefits to a wide range of sectors, including environment and health, the public sector, finance, transport, home affairs, and agriculture. Already, AI is used to improve health care, optimize service delivery, manage energy consumption efficiently, and drive human progress in countless ways. Companies use AI-based applications to optimize their operations, but the deployment and use of AI entails both benefits and risks that trigger major legal, regulatory, and ethical debates. The main concerns are privacy, bias, discrimination, safety, and security.

Given the fast development of these technologies, finding a balanced approach to regulating AI has become a central policy question in the EU, and the key issue is how to minimize risks and protect users without curtailing innovation and the uptake of AI. After many debates, studies, and impact assessments, the Commission thought it had found the answer: the Artificial Intelligence Act. It is the first-ever attempt to enact a horizontal regulation of AI, and the EU wants it to be an example for the rest of the world to follow. So what are the main points of this regulation?

Objective

The main objective is to lay down a common legal framework for the development, marketing, and use of AI products and services in the EU. Through this act, policymakers aim to make Europe world-class in the development, and of course the use, of secure, trustworthy, and human-centred artificial intelligence. The regulation addresses the human and societal risks associated with specific uses of AI in order to create trust. In parallel, the act's coordinated plan outlines the steps member states should take to boost investment and innovation, all to strengthen the uptake of artificial intelligence across Europe. The new framework lays down different requirements and obligations for the development, market placement, and use of AI systems based on a risk-based approach.

According to this pyramidal approach, AI systems presenting a clear threat to people's safety and fundamental rights would be banned from the EU market because of the unacceptable risks they pose. These include, for example, systems that exploit vulnerable groups or AI systems used by public authorities for social scoring. A wide range of high-risk AI systems would be authorized for commercialization and use, but subject to a set of requirements and obligations, particularly on conformity, risk management, testing, data use, transparency, human oversight, and cybersecurity. AI systems presenting only limited risk, such as chatbots or biometric categorization systems, would only need to comply with basic transparency obligations. All other AI systems presenting low or minimal risk could be developed and used in the EU without additional legal obligations. But what happens when similar technologies fall into different categories depending on how they are used?
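The pyramid described above can be sketched as a simple lookup, purely for illustration. The tier names and example use cases below paraphrase the proposal; the Act's own text and annexes, not this mapping, define the actual categories:

```python
from enum import Enum


class RiskTier(Enum):
    """The four tiers of the Act's risk pyramid (paraphrased)."""
    UNACCEPTABLE = "banned from the EU market"
    HIGH = "allowed subject to conformity, risk management, and oversight"
    LIMITED = "basic transparency obligations only"
    MINIMAL = "no additional legal obligations"


# Illustrative examples only -- the classification of a real system
# depends on the Act's annexes and on how the system is actually used.
EXAMPLE_CLASSIFICATION = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "exploitation of vulnerable groups": RiskTier.UNACCEPTABLE,
    "biometric identification at border control": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def obligations_for(use_case: str) -> str:
    """Look up the (illustrative) tier and summarize its obligations."""
    tier = EXAMPLE_CLASSIFICATION.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"
```

The key design point this sketch captures is that the Act regulates *uses*, not technologies: the same underlying system (e.g. facial recognition) can land in different tiers depending on the use case it is deployed for.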

Let's take facial recognition as an example. Facial recognition systems, increasingly used to identify people, can be very useful for public safety and security, but they can also be intrusive. The risk of algorithmic error is high, so the use of these technologies can affect citizens' fundamental rights, result in discrimination, violate the right to privacy, and even lead to mass surveillance. That is why the Commission's Artificial Intelligence Act differentiates these systems according to their high-risk or low-risk usage. For instance, using real-time facial recognition systems in public places for law enforcement purposes would be prohibited, with some exceptions, because of the significant threat to fundamental rights. But many of the same technologies, used for controlling borders or access to public transport and supermarkets, could still be allowed, subject to strict controls and safety requirements. Other uses would be considered limited or low risk and would be subject to even less stringent rules. The distinction between low-risk and high-risk usage, however, is not always easy to draw. So how will the system work in practice?

There will be significant impacts, particularly for providers of high-risk AI systems. To be allowed to sell their AI products and services in the Union, they would have to comply with a range of requirements protecting users' safety, health, and fundamental rights. National market surveillance authorities would be responsible for ensuring that operators comply with the obligations and requirements for high-risk AI systems, and for restricting or withdrawing such systems from the market if they fail to do so. A European Artificial Intelligence Board would be set up to facilitate the implementation of the new rules and ensure cooperation between the national supervisory authorities and the Commission. However, experts warn that the EU must first get the definition of AI systems right.

Conclusion

All in all, I think the European Commission's proposal for an AI Act is well designed. What is problematic, however, is that the definition of an AI system is very broad: it encompasses not only machine learning systems but potentially all kinds of software, and it might therefore lead to over-regulation. This is not the only concern. Another problematic point is enforcement, because the proposal essentially relies on self-assessment by the providers of high-risk AI systems, which is a very weak enforcement structure. There are no individual rights for citizens and no collective rights for civil rights movements and consumer organizations, so there is a danger that the regulation, once enacted, will not be enforced. This criticism comes not from one or two experts but from many.

Other experts and stakeholders are also calling for a number of amendments, including narrowing the definition of AI systems to focus on those that could really pose a risk, broadening the list of prohibited AI systems, and ensuring proper democratic oversight of the design and implementation of European AI regulation. Member states have been reviewing the proposal for some time, and the European Parliament is also busy with it, but there is still a lot of work ahead if the EU wants the Artificial Intelligence Act to achieve its twin aims: ensuring safety and respect for fundamental rights while stimulating the development and uptake of AI-based technology in all sectors. It is an enormously important task. The AI Act is the first of its kind anywhere in the world, and it may set global standards for the deployment of AI systems. It will regulate an incredibly broad set of technologies and tools across all sectors, so it is important to get it right.
