Artificial Intelligence Act Adopted by the European Parliament

Science and Technology - March 28, 2024

The European law on artificial intelligence, known as the AI Act, a world first, was recently adopted by the European Parliament by a large majority. The law is the result of more than three years of work in Brussels to address the challenges that have emerged in the field at a rapid pace over the past decade. The unprecedented development of AI systems has brought with it a number of risks and given rise to undesirable phenomena such as deep fakes. The text of the law is designed both to protect the fundamental rights of citizens, democracy and the rule of law, and to stimulate innovation and research, with the stated aim of establishing Europe as a leader in the field. The new legislation introduces two levels of governance for enforcement oversight – one national and one European – with both Member States and the EU creating new authorities for this purpose. In principle, Member States will have two years to implement its provisions after entry into force, but some of the law’s prohibitions will take effect six months after adoption.

In Romania, the law against deep fake is under parliamentary debate

Although Member States have so far taken only timid steps in this area, there have been attempts at national regulation, including in Romania, where a draft law to combat deep fakes has reached Parliament. Extremely controversial – some say it follows the Chinese approach, because it punishes deep fakes with imprisonment – the bill has bogged down in parliamentary debate. Now that the AI Act has been adopted, the text could be improved or, as some argue, a new one could be drafted to meet the requirements of European law.

“The EU has kept its promise. We have linked the concept of artificial intelligence to the fundamental values that underpin our societies. However, much work lies ahead, which goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, our labour markets and the way we wage war. The AI Act is a starting point for a new model of governance built around technology. Now we need to focus on putting this law into practice,” said Dragoș Tudorache, Co-Rapporteur of the Civil Liberties Committee in the European Parliament.

Another co-rapporteur, Brando Benifei, also stressed the importance of setting up the new European AI Office, which “will help companies to start complying with the rules before they come into force”.

The regulation is currently in the so-called corrigendum procedure and is expected to be finally adopted in the coming months, before the end of the parliamentary term. After that, it also needs formal endorsement from the Council. It is expected to enter into force 20 days after publication in the Official Journal and will be fully applicable 24 months from that date, except for the bans on prohibited practices, which will apply after six months. Codes of practice must be ready nine months after entry into force, the rules on general-purpose AI, including governance, will apply after 12 months, and the obligations for high-risk systems after 36 months.

According to the European Parliament’s official press release issued after adoption, the regulation imposes obligations on AI systems according to their potential risks and level of impact. Specifically, it bans some applications, imposes obligations on others, and provides for heavy fines for non-compliance. The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics. Emotion recognition in workplaces and schools, social scoring, predictive policing (when based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be banned.

General purpose AI (GPAI) systems and the GPAI models on which they are based must meet certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training. In addition, artificial or manipulated images, audio or video content (“deep fakes”) must be clearly labelled as such.

The EU law defines high-risk systems for the first time and subjects them to strict rules. It classifies AI systems according to risk and imposes progressively stricter rules, up to a total ban on some of them. In other words, the riskier an application, the stricter the rules. Content recommendation systems or spam filters will be considered “limited risk” and will have lighter rules to comply with: essentially, they will only have to tell users that they are powered by AI. “High-risk” AI systems will be subject to more rigorous scrutiny. Certain uses are prohibited outright, namely biometric categorisation based on sensitive characteristics and the untargeted collection of images from the internet or CCTV footage to create facial recognition databases. Social scoring that regulates people’s behaviour, emotion recognition at school or in the workplace, and some types of predictive policing are also prohibited. By exception, law enforcement will be allowed to scan people’s faces in public using remote biometric identification systems in cases of serious crimes such as kidnapping or terrorism. For other high-risk AI systems – those with a significant potential to harm health, safety, fundamental rights, the environment, democracy and the rule of law – specific obligations are foreseen.

Citizens have the right to complain about AI systems

Examples of high-risk AI uses include critical infrastructure, education and training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and mitigate risks, maintain usage logs, be transparent and accurate, and provide for human oversight. Citizens will also have the right to lodge complaints about AI systems and to receive explanations about decisions based on high-risk AI systems that affect their rights, according to the European Parliament’s official press release. The legislative package provides for supervision of the application of these rules through an Office to be set up within the European Commission. Fines of up to 7% of turnover are also foreseen for non-compliant companies. This European legislation is a big victory in the fight against the dangers of irresponsible use of AI-based technologies and a huge step forward compared to the approach overseas. In the US, safety in the field relies on a voluntary joint commitment by major companies – including Google, Meta, Amazon, Microsoft and OpenAI – to implement measures to flag AI-generated content.

Deep fake videos used for online scams

Earlier this year, Romania’s Energy Minister Sebastian Burduja announced that he would file a criminal complaint after his face was used in a deep fake video. The governor of the National Bank of Romania, Mugur Isărescu, was also the victim of such online manipulation. In both cases, an official’s face was used to promote non-existent investments. Immediately afterwards, a legislative initiative to combat the deep fake phenomenon was tabled in the Bucharest Parliament by two Romanian parliamentarians, providing for prison sentences for those found guilty of making such videos. The draft legislation has sparked much public debate, with some human rights NGOs arguing that such a move would violate freedom of expression. Other voices supported the legislative initiative, calling for harsh penalties in cases where such videos have a serious emotional or financial impact. The whole debate – beyond the rather haphazard way in which the draft legislation was born – basically reveals the need for regulation in this area, including at national level. After the form adopted by the Chamber of Deputies turned out to be radically different from that of the Senate – the latter imposed only financial penalties on the perpetrators of deep fakes, while the deputies went as far as a prison sentence – the draft was sent back to the specialised committees for consideration. Which, according to MEP Dragoș Tudorache, would be a good thing: in his opinion, the adoption of the law can wait, and Romania should follow the lines of the AI Act when drafting it.

“I would say that the deep fake law in the Romanian Parliament could perhaps wait, because in fact, through the AI Act we are also regulating this area of responsibility for deep fakes. So basically, opting out is not an option. The law as it comes from the European level will be implemented in Romania, or will have to be implemented. How difficult it will be, I repeat, depends on the capacity of the Romanian state to attract the right people for the roles the law assigns to these authorities (…)”, said Dragoș Tudorache.

An open letter from Active Watch, supported by nine other human rights organisations, pointed out that the text contains ambiguities and terms or expressions that betray a superficial understanding of the phenomenon. This view was also shared by the Legislative Council and the audiovisual regulator CNA.

“There are no exceptions to protect forms of freedom of expression, such as the use of deep fake content for pamphleteering or artistic purposes or in commercial productions (advertisements) or in the film industry, as is already the case in practice and as foreseen in the forthcoming European legislation,” charges Active Watch.