Unprecedented legislation was agreed at EU level last year, making Europe the first continent to impose binding rules on the use of artificial intelligence (AI). The draft law was presented by the Commission in April 2021 and adopted by the European Parliament two and a half years later. The regulatory process was influenced by the advent of ChatGPT – the text generator developed by the Californian company OpenAI, which highlighted not only the potential of AI but also the risks of its use – and was given further momentum by the spread of the deep fake phenomenon, in which video manipulation technology is used to impersonate real people.
The final text of the law is due to be approved sometime in April by the Council of the EU, after which Member States have two years to implement it. It remains to be seen how many of them will manage to meet the deadline, as some – Romania, for example – are only now adopting national AI strategies. But one European country has got ahead of everyone, including Brussels, and has already set up an agency to oversee AI. That country is Spain, which began taking steps to create the body in 2022, long before the EU legislation came into force. AESIA – the Agencia Española de Supervisión de la Inteligencia Artificial – will thus, beyond its domestic role, be able to help Brussels bodies implement the AI legislation faster and more efficiently.
European law defines for the first time high-risk systems and subjects them to strict rules
European legislation in the field of AI has been designed around the need to balance the protection of democracy and fundamental human rights against the encouragement of research, innovation and investment in the field. It starts by classifying AI systems according to the risks they pose and imposes progressively stricter rules as those risks grow, up to an outright ban on some systems.
In other words, the riskier an application, the stricter the rules. Thus, content recommendation systems or spam filters will be considered “limited risk” and have lighter obligations: essentially, they will only have to tell users that they are powered by AI. AI systems classified as “high risk” will be subject to stricter control. Those used in medical devices, for example, will have to rely on higher-quality data and provide clear information to users.
Certain uses of AI are banned altogether: social scoring systems that regulate people’s behaviour, emotion recognition at school or in the workplace, some types of predictive policing, and certain applications of biometric recognition. China’s facial recognition systems, which are at odds with the principles of liberal democracies, also fall into this category. But even in a democracy some exceptions are necessary for serious crimes such as kidnapping or terrorism; in these cases, it will be allowed to scan people’s faces in public using remote biometric identification systems.
The package of laws on AI provides for the supervision of the application of these rules by an office to be set up within the European Commission. There are also hefty fines for companies that fail to comply: up to 7% of global turnover or €35 million. This European legislation is a major victory in the fight against the dangers of irresponsible use of AI-based technologies and a huge step forward compared to the approach overseas. In the US, safety in the field relies on a voluntary joint commitment by major companies – including Google, Meta, Amazon, Microsoft and OpenAI – to implement measures to flag AI-generated content.
Romania throws itself into the war on deep fakes with a bill that doesn’t ban them
Regulating this area is all the more important as video manipulation technology has become increasingly sophisticated and more accessible to the public. Deep faking, originally used to create non-consensual pornographic content, has in just a few years become a powerful weapon in misinformation and manipulation of public opinion. There is thus a justified fear across the European political spectrum that artificial intelligence-based technology could be used in smear campaigns, to influence election campaigns or to impose important policy measures. Among the first personalities to fall prey to such practices were former US presidents Barack Obama and Donald Trump, and victims included the late Queen Elizabeth II and Pope Francis. More recently, the deep fake has also hit the European periphery, with doctored images of Moldovan President Maia Sandu making the rounds on the internet.
Against this backdrop of concern about combating deep fakes, a draft law has emerged in Romania – a country that currently has no national AI strategy – that does not ban such images but only requires that they be flagged. According to the initiators, “the public must be informed about fake content”. So far, the bill has been neither adopted nor rejected.
The government in Bucharest does not yet have an AI strategy
The Ministry of Digitisation recently presented an updated version of the draft AI strategy first drawn up in 2019. To show that it is serious about the task, the head of Digitisation has also prepared the Government Decision by which the strategy would be adopted. Adoption was initially planned for 2020; since then, an AI committee has been set up to gather “initiatives in the field”, there have been consultations with companies and research entities, a joint working group was even formed between the ministry and a university (the Technical University of Cluj, a city that has become Romania’s main pole of IT development) and, finally, the Ministry of Research developed “a strategic framework”. After several changes of portfolio holder in successive governments in Bucharest, the project was taken over last year by the young minister for digitalisation, Bogdan Ivan.
At the launch of the strategy earlier this year, he gave assurances that the document included proposals from the Romanian Committee for AI, which in turn took into account the results of consultations with experts. The aim of the strategy, in short, would be “to protect the rights of Romanians in the online environment”. At the same time, the Romanian minister stressed that the strategy is inspired “by concrete actions” proposed at EU level, but is “firmly anchored in the specific realities and needs of the Romanian context”.
The explanation for this anchoring in “Romanian needs” is that the Romanian AI strategy was developed with … European funds, obtained for the implementation of the “Strategic Framework for the Adoption and Use of Innovative Technologies in Public Administration 2021-2027”, financed by the EU Operational Programme for Administrative Capacity 2014-2020.
Romania’s “remarkable” achievements in preparedness for implementing AI technologies in government are reflected in the Oxford Insights Index, in which the country ranks 64th in the world and last in Eastern Europe. The region lags far behind Western Europe in this respect, according to the ranking compiled by the British company, which advises governments and companies on AI opportunities. In the report, which covers 193 countries and is topped by the US, three Western EU member states appear in the top 10: France (6th), Germany (8th) and the Netherlands (10th).
The Spanish lesson on how one Member State can provide concrete help in implementing EU-wide legislation
One country that is faring much better than other members of the EU bloc is, as mentioned above, Spain. Even though it sits at 27th in this ranking, 17 places behind the last country in the top 10, Spain is the first EU country to have set up an agency with a role in supervising AI. The agency already has an office in A Coruña, the financial and industrial centre of Galicia, as well as an initial budget of €5 million and a staff of 23, with plans to add more this year, according to the Madrid government.
AESIA will be the key player in managing and steering the Spanish AI ecosystem, and until the European legislation comes into force it will carry out voluntary oversight of the field, liaising with the European ecosystem and running regular test environments for high-risk AI systems. AESIA has already started work with a sandbox dedicated to Spanish companies. Sandboxes are test environments designed to show how something – a technology, for example – works before launch. But they are also useful for legislation, and this is exactly what AESIA will do: provide the conclusions of practical exercises that can inform changes to the legislation before it is implemented.