
Artificial Intelligence Act: approval of the report on the proposed regulation. A first step towards greater protection of citizens in an increasingly digitalised age

Science and Technology - May 24, 2023

Artificial Intelligence (AI) is commonly defined as the ability of a machine to reproduce characteristics specific to humans, such as learning, reasoning, planning and creativity.

Artificial intelligence is becoming ever more sophisticated and an integral part of our society. This, of course, exposes the community to real risks, such as the ability of AI to produce convincing fake news and manipulate public opinion: AI can faithfully reproduce a voice and other human characteristics, to the point of making one doubt whether an image, a song or a text was created by a person or generated artificially.

It is therefore crucial that AI be regulated like any other sector. Despite its many positives, the negative aspects associated with it cannot and must not be underestimated, and decisive action must be taken to ensure that artificial intelligence does not harm our community in any way, broadly understood.

The EU has been dealing with this matter for some time now, having grasped from the outset its relevance at European and global level, and has insisted that Artificial Intelligence must fully respect EU law and all fundamental freedoms.

In this context, the European Union has embarked on a path towards common legislation to ensure the proper functioning and development of the internal market in light of the ever-increasing use of artificial intelligence tools, carrying out a considerable amount of work in the AI sector.

On 11 May, the so-called European Artificial Intelligence Act took another step forward with the approval of the European Parliament report on the proposal for a regulation establishing a legal framework on artificial intelligence in the European Union.
The Artificial Intelligence Act was originally proposed by the European Commission in April 2021. At the end of 2022, the Council of the EU adopted a so-called general approach on the legislation, which is currently being discussed in the European Parliament and will soon be put to a plenary vote.

The report was adopted by 84 votes in favour, 7 against and 12 abstentions at the joint meeting of the Committees on the Internal Market and Consumer Protection (IMCO) and on Civil Liberties, Justice and Home Affairs (LIBE).

This draft negotiating mandate needs to be endorsed by the whole Parliament in the next plenary session, which will take place from 12 to 15 June. After Parliament's final approval, negotiations with the Council on the final form of the law will begin.

To ensure a human-centric and ethical development of Artificial Intelligence (AI) in Europe, MEPs endorsed new transparency and risk-management rules for AI systems.

At the heart of the AI Act is a classification system that determines the level of risk an artificial intelligence could pose to a person's health and safety or fundamental rights. The framework comprises four levels of risk: unacceptable, high, limited and minimal.

AI systems that carry a limited and minimal risk, such as spam filters or video games, can be used with few requirements (basically, it will be enough for them to fulfil their legal transparency obligations).
The earlier Commission proposal defined high-risk systems as those applied, for example, to critical networks, employment, education and training, and essential public services. MEPs have now broadened this category to also cover systems that may cause harm to health, safety or fundamental rights. Parliament also extended the list of prohibited practices, i.e. systems deemed to pose an unacceptable risk, to include biometric identification software: real-time biometric surveillance systems in public spaces will be banned, with very few exceptions. In addition, the use of emotion recognition software is prohibited in law enforcement, border management, the workplace and education.
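The four-tier framework described above can be pictured as a simple lookup. The sketch below is purely illustrative: the tier names mirror the proposal, but the example use cases and the mapping itself are hypothetical simplifications for the reader, not legal categories.

```python
# Illustrative only: tier names follow the AI Act proposal's four risk
# levels; the example use cases are hypothetical simplifications.
RISK_TIERS = {
    "unacceptable": [
        "real-time biometric surveillance in public spaces",
        "emotion recognition in law enforcement",
    ],
    "high": [
        "critical infrastructure",
        "employment screening",
        "education and training",
        "essential public services",
    ],
    "limited": ["chatbots"],              # transparency obligations apply
    "minimal": ["spam filters", "video games"],
}

def risk_tier(use_case: str) -> str:
    """Return the risk tier for a (hypothetical) use-case label."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal"  # anything not listed defaults to minimal risk

print(risk_tier("spam filters"))         # minimal
print(risk_tier("employment screening")) # high
```

The key design point of the Act is exactly this asymmetry: obligations scale with the tier, from near-zero requirements at the bottom to an outright ban at the top.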

In order to allow the use of certain high-risk artificial intelligence systems, such as autonomous vehicles and medical devices, developers and users will be required to comply with regulations that require rigorous testing, proper documentation of data collection and storage, and human supervision of these machines.

On the subject of data processing, the report provides for more and better protection by introducing tighter controls on how providers of high-risk systems may process sensitive data, such as sexual orientation or political and religious beliefs. In essence, such information may not be handled via synthetic, anonymised, pseudonymised or encrypted workarounds; above all, the data obtained may not be passed on and must be deleted after the bias assessment. Providers will also be required to document the reasons for the data processing.

Very heavy penalties are envisaged for anyone who violates these rules. Companies, in particular, could face fines of up to 30 million euros or 6 per cent of their annual global turnover, whichever is higher. Sending false or misleading documentation to regulators is also prohibited.
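The fine ceiling above works as a maximum of two quantities, so the fixed 30-million-euro cap binds only for smaller companies. A minimal sketch, assuming the "whichever is higher" reading of the proposal's penalty clause:

```python
# Hypothetical illustration of the maximum-fine rule: up to 30 million
# euros or 6% of annual global turnover, whichever is higher.
def max_fine(annual_global_turnover_eur: float) -> float:
    return max(30_000_000.0, 0.06 * annual_global_turnover_eur)

# For a company with 1 billion euros of turnover, 6% (60 million)
# exceeds the 30-million floor, so the percentage cap applies.
print(max_fine(1_000_000_000))  # 60000000.0
print(max_fine(100_000_000))    # 30000000.0
```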

According to the proposal, a European Artificial Intelligence Committee would also be established, which would oversee the implementation of the regulation and ensure its uniform application throughout the Union. The body would have the task of issuing opinions and recommendations on possible conflicts of law, as well as providing guidance to national authorities.

The aim is to give the green light and thus conclude the formal procedure on the regulation by 2024, which is an ambitious but crucial goal for the protection of all European citizens. 

Indeed, especially in the light of Europe’s desire to move towards a digital and green transition, artificial intelligence is an important tool to continue on this path. 

The ambitious European objectives, however, must not neglect the protection and defence of human beings, who could be the main victims of an inadequately regulated artificial intelligence system.

With this new proposal approved in committee, the main aim is to protect the individual in every aspect: sensitive data, family life, human rights, safety, and health.

And as the ECR Shadow Rapporteur in the Civil Liberties Committee Rob Rooken said: “The developments in the AI world are going very fast and will have a lot of impact on our lives. We are probably underestimating how big of an impact that will be. With the adopted AI Act today, the European Parliament has made an effort to protect the fundamental rights of EU citizens.”

Through regulation, therefore, the aim is to make Artificial Intelligence systems useful to citizens without exposing them to risks that could severely damage their lives, private or public.

“In recent months, we have heard many extreme views on the implications of the widespread use of artificial intelligence and calls for distrust of this technology. I hope that this legislation will address these concerns, although it is clear that it will need to be reviewed and updated as it develops”, said MEP Kosma Złotowski, who also emphasised the enormous potential of artificial intelligence: “Artificial intelligence can help in many areas of life and in many sectors of the economy. It is worth investing in and improving this technology in the EU. If we are realistic about a shorter working week, we need to increase our productivity, and this is possible through the use of AI-based tools”.

Indeed, making use of technology and Artificial Intelligence is important and a very useful action considering that we are living in an increasingly advanced world in terms of digitisation and technology. However, it must always be remembered that technology must be a support for humans and not a substitute for them.