The European Approach to AI

Science and Technology - June 26, 2024

Is Europe doing enough to govern the process of incorporating Artificial Intelligence into our daily lives? This is a question we are strongly urged to answer, especially as Europe charts a new course following the European Parliament elections.

In the meantime, we must start from the assumption that AI is not a completely new technology that has entered our daily lives for the first time in recent months. It is a technology that has, in different ways, already been in use for several years. Rather, it is the availability of large computing capacity (due to technological advancement in the hardware field), together with the huge amounts of data available online (needed to ‘train’ AI), that has allowed this technology to explode.

Over time, information about AI has moved from scientific articles to the general press. This has led to the pursuit of sensationalism and clickbait articles outlining doomsday scenarios or unprecedented (and often questionable) capabilities of AI. The shift of information on AI to the mainstream press has brought the issue into the everyday life of European citizens, who are mainly attracted by the ease of use of certain platforms and the possibility of achieving significant results, in terms of content creation, with little or no effort.

There has been no shortage of critical issues in the use of these technologies, from scams and misleading advertising exploiting deepfakes, to real cases of the dissemination of AI-generated pornographic images reproducing the features of famous people. Nor should we overlook how AI has entered our schools and universities in a big way, and how teaching staff are intent on finding ways to identify when AI has helped in the production of a student’s work.

So, is Europe doing enough? Let’s start at the beginning of the debate: the European Council in October 2020, when European leaders opened the discussion on the digital transition. There, it should be recalled, the Commission was invited to find solutions for increasing public and private, European and national investments in AI and its promotion.

In April 2021, the Commission proposed an initial regulation on AI, seeking to harmonise the rules concerning this technology and to increase trust in AI and its development.

The Council approved its position on the regulation in December 2022. At that point, the debate turned to the need for AI placed on the EU market to meet safety standards and to comply with fundamental rights, as well as the EU value system. Months of negotiations were then needed before the Council and Parliament reached an agreement on the regulation one year later, in December 2023. This agreement will apply from 2026.

The final green light for the regulation arrived on 21 May 2024, with the Council – by approving this act – setting a worldwide standard for AI regulation for the first time. But is this enough?

The Italian government is in fact driving forward the debate on these issues, especially from the point of view of the responsible use of this technology and the fight against harmful uses. Think, for instance, of copyright and its application to content generated by AI from data available online (probably itself also covered by copyright). The government’s intention is to support, first and foremost in Brussels, the strengthening of the rights of those who produce content. It also supports ethical principles that put people, not technology and its development, at the centre. What is perhaps most needed in this context is vigilance against the degeneration and immoral uses of technologies that, if reasonably regulated, can genuinely help the everyday life of citizens. At the national level, and then also at the European one, an updating of copyright laws also seems necessary. Unfortunately, in this respect, the challenge seems unequal, especially if we consider the length of the legislative process (both national and European) compared with the leaps forward that technology can make in just a few months.

This is a real challenge that national governments and European institutions must turn into an opportunity for growth – not only economic growth. All this places great responsibilities on Europe, which cannot avoid playing a leading role at this juncture by limiting the risks of a landscape that is in many ways still completely unknown. We are probably facing a turning point in human history and technology – a turning point on a par with the invention of printing, the introduction of steam into production processes or the invention of the internal combustion engine. A real paradigm shift that must find us ready, as nations and as the European Union.

A first concept that the EU should embrace is that of governing change. The introduction of AI into the everyday life of citizens will necessarily lead to changes. Whether social, cultural or political, these need to be channelled into processes that are governable, predictable and steerable. In this sense, the challenge for the EU is to create a kind of control centre that can analyse the risk factors brought about by AI. This must take place both in the context of the very rapid technological advancement to which we have become accustomed and in the analysis of the medium- and long-term processes and changes affecting our society. Analysis and foresight are therefore the essential factors for protecting citizens in all the fields that pertain to the personal and public sphere of the individual: from the protection of jobs threatened by the (not always genuine) potential of AI, to the strenuous defence of the fundamental rights of individuals, investigating the security and risks of introducing AI in fields more sensitive than others.

At the beginning of this article, we discussed how information about AI has shifted over time from scientific articles to the mainstream press, and how this has impoverished knowledge about the technology. One of the elements to focus on at the European level should therefore be a pooling of knowledge through the creation of a common research and development hub for this technology. Leaving the lead to private companies is definitely not the best way to protect the interests of citizens in terms of personal rights and data protection. A technology so important that it can easily be considered a game changer in so many different areas can only be studied in a shared way within the European institutions.

At the same time, one cannot fail to support companies and start-ups that want to invest in research and development in the field of AI. Leaving these hotbeds of ideas alone in the international market can be more of a risk than a benefit. Private capital may have interests different from the expectations of the public and the EU. The risk is that, in the name of business, opaque uses of this technology will be normalised, perhaps even eroding the barrier the EU has erected on the rights front.

Finally, we cannot forget that the EU is made up of people, and citizens must be enabled to make the best use of new technologies. The 2023 figures released by the European Commission on digital literacy speak of an average digital literacy rate of 6.2 out of 10. This makes clear how important it can be to create digital literacy campaigns aimed at training citizens in the correct use not only of ordinary IT tools, but above all of modern AI-related technologies. Such a path would be most useful if it began at the earliest stages of education and gradually led to increasingly specialised knowledge and skills.