EU Artificial Intelligence Office Is Established

Legal - March 18, 2024

The Office will supervise the implementation of the AI Act

In February 2024, the European Commission decision establishing the European Artificial Intelligence Office came into force. The Office sits within the Directorate-General for Communications Networks, Content and Technology (DG CNECT), as a sort of internal agency of the Commission itself. It certainly represents an important novelty and a demonstration of the EU’s interest in this matter; however, its functions appear to have been scaled down compared with the initial wishes of the European Parliament, which had envisaged a body that would centralise the application of all AI rules.

The Artificial Intelligence Act: what it is and what it does

The Office will have the task of facilitating the implementation of the Artificial Intelligence Act, which, once finally approved, will be the world’s first regulation on the use of AI. The Act is part of the EU Digital Strategy, through which the European Union seeks to ensure a more careful and considered development and use of this new technology. The regulation is rooted in the European Strategy on Artificial Intelligence, presented back in 2018, and in the AI White Paper of 2020, and it reflects the European Union’s choice to act on the topic early and incisively, so as to shape the governance of the technology rather than be shaped by it.

Artificial Intelligence (AI) today represents one of the great challenges facing mankind: its potential is immense, and some even call it revolutionary, but so are its risks. The use of AI in fields such as medicine, education, and engineering opens up the possibility of great and rapid transformations that can bring significant improvements to the lives of all of us. However, the danger that this technology poses to individuals and to society as a whole, where it is not properly regulated, must not be overlooked.

The first legislative framework aimed at regulating artificial intelligence and its use was presented by the European Commission in April 2021, but it was not until December 2023 that the European Parliament and the Council of the European Union managed to reach a provisional agreement.

Following an initial go-ahead on the text by the Council of the European Union, the European Parliament voted on its own proposal, which was adopted by a large majority, with 499 votes in favour, 28 against and 93 abstentions, and which extended the list of high-risk applications and prohibited AI practices. That text formed the basis for the negotiations between the Parliament, the European Commission and the Council.

The draft regulation already contains proposals concerning the calculation of possible sanctions to be imposed on companies that contravene the rules set by Brussels.

Risk assessment and principles for a good law

The AI Act aims to foster the development of AI in all 27 Member States, but under clear principles and rules, not least to prevent and address the risks that this new technology poses to security and rights. AI systems will be classified on a four-level scale (‘minimal risk’, ‘limited risk’, ‘high risk’, and ‘unacceptable risk’) according to the danger they pose to the security, livelihood, and fundamental rights of European citizens and of people everywhere, and systems in the unacceptable-risk category will be prohibited. One example of an unacceptable, and therefore prohibited, risk is the social scoring used by the government of China; the psychological conditioning exercised by certain advertising messages is also considered unacceptable. Personnel selection and operations carried out by means of assisted surgery are examples of high risk. Chatbots are considered a limited risk, while video games and anti-spam systems fall into the minimal-risk category.
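Purely as an illustrative sketch, and not anything drawn from the regulation’s text, the four-tier scheme and the examples above can be modelled in a few lines of Python; every name in the snippet is hypothetical.

    from enum import IntEnum

    class RiskTier(IntEnum):
        # The four tiers named in the draft AI Act, ordered by severity.
        MINIMAL = 0       # e.g. video games, anti-spam systems
        LIMITED = 1       # e.g. chatbots
        HIGH = 2          # e.g. personnel selection, assisted surgery
        UNACCEPTABLE = 3  # e.g. social scoring; prohibited outright

    # Hypothetical mapping of the examples cited in this article to tiers.
    EXAMPLES = {
        'video game': RiskTier.MINIMAL,
        'anti-spam system': RiskTier.MINIMAL,
        'chatbot': RiskTier.LIMITED,
        'personnel-selection system': RiskTier.HIGH,
        'assisted-surgery system': RiskTier.HIGH,
        'government social scoring': RiskTier.UNACCEPTABLE,
    }

    def is_prohibited(tier: RiskTier) -> bool:
        # Only the top tier is banned outright; high-risk systems remain
        # lawful but are subject to strict requirements.
        return tier is RiskTier.UNACCEPTABLE

    for system, tier in EXAMPLES.items():
        print(f'{system}: {tier.name} (prohibited: {is_prohibited(tier)})')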

But what are the crucial points for the enactment of good, final legislation? Certainly, in its final version, the regulation should first and foremost address the issue of security, setting quality standards that prevent harm to users from misuse or possible malfunctioning of AI systems. It is also important to clarify who will be legally responsible for the choices made (and thus for any mistakes made) by AI, so that citizens have guarantees in the event of damage. Another key factor is ensuring that the choices made by AI are transparent and understandable to humans. Last but not least is the issue of privacy, which increasingly appears to be a key area to be protected.

The tasks of the AI Office

The Office will have, first and foremost, the task of drawing up guidelines and guidance documents that can facilitate the consistent application of the relevant legislation. The Regulation makes it clear that the Office will not replace or in any way override the specific competences of the Member States and of the national authorities in charge, but should be understood as a supporting and guiding body.

The Office will therefore work at national and EU level to facilitate the harmonisation and enforcement of AI legislation, monitoring so-called general-purpose AI (GPAI) models, i.e. models capable of performing a wide range of distinct tasks, and developing methodologies for evaluating such models (e.g. OpenAI’s GPT-4, which powers the chatbot ChatGPT).

The Office will also have to manage how the rules of the new regulation apply to artificial intelligence systems that are already subject to other EU legislation, such as the Digital Services Act in the case of social media.

A brief overview of the international context

While the European Union works towards the final enactment of the Artificial Intelligence Regulation, other international bodies are also turning their attention to the need to govern the development of this new technology and its application.

The United Nations Educational, Scientific and Cultural Organisation (UNESCO), a specialised agency of the United Nations whose aim is to promote peace and security in the world, issued the ‘Recommendation on the Ethics of Artificial Intelligence’, adopted by all 193 member states. The Recommendation centres on four core values, linked to ten general principles, that safeguard the human being in the ethical use of AI.

At United Nations level, Secretary-General Guterres set up a High-level Panel on Digital Cooperation to facilitate a constructive dialogue on the possibilities offered by technology to advance human well-being, while addressing risks and preventing harm. Some of the recommendations included in ‘The age of digital interdependence – Report of the UN Secretary-General’s High-level Panel on Digital Cooperation’ emphasise the need for intelligent systems that can explain their decisions. Another theme that emerged here is the importance of being able to clearly establish who bears responsibility for the use of such systems in the event of damage.

UN Deputy Secretary-General Amina J. Mohammed, in her speech at the event ‘Advancing the Global Goals with artificial intelligence’, also addressed the development of artificial intelligence, describing it as one of the most important advances of our time, not least because of the use already being made of it in strategic sectors such as industrial automation. However, she also highlighted the risks associated with the unethical and unregulated use of AI, such as increasing inequality and the danger of manipulation of political processes and information.