
Artificial Intelligence: Commission Urges Governments to Appoint AI Board Members

AI Friend or Foe? - April 12, 2024

This week the European Commission plans to send a communication to national governments asking them to designate their respective AI regulatory authorities, Roberto Viola, the Commission's Director-General for digital affairs, announced on April 3. The Commission has also announced the creation of the European AI Office, which will serve as the new EU-wide regulator for artificial intelligence and operate within the Directorate-General for Communications Networks, Content and Technology (DG CNECT).

The Office is responsible for overseeing, supervising, and enforcing the AI rules that apply to general-purpose AI (GPAI) models and systems across the 27 EU Member States. Its mandate covers analyzing unexpected systemic risks, evaluating model capabilities, examining AI models, and investigating possible infringements or failures to comply. The AI Office will also launch voluntary codes of conduct for AI developers, adherence to which will imply conformity with the established standards.

How the AI Board will work

The AI Office is set to play a central role in the EU's international AI cooperation and in strengthening ties between the European Commission and academia, including the forthcoming committee of independent scientific experts. It is meant to help the 27 Member States apply the legislation in a coordinated way, for example through joint investigations, and will act as the administrative arm of the AI Board, the intergovernmental forum for harmonization among national regulators. The Office will also promote the creation of regulatory sandboxes, allowing companies to test AI innovations in a controlled environment, and will provide data, expertise, and support to small and medium-sized enterprises to help them comply with the rules.

The AI Board, comprising one delegate from each Member State, with the European Data Protection Supervisor (EDPS) and the AI Office attending as non-voting observers, is tasked with ensuring uniformity and coordination in enforcement by the national competent authorities.
The AI Office will act as the Board's secretariat, organizing meetings at the Chair's request and setting the agenda.
The Board will support the AI Office in guiding national authorities in developing regulatory sandboxes and in fostering cooperation and information sharing among them.

With the backing of the AI Office, the national authorities of the Member States are responsible for enforcing the AI Act, the European law that governs AI through a risk-based framework. Member States have one year to designate their national AI regulatory authorities, whose representatives will sit on the AI Board, the body tasked with ensuring the law is applied uniformly across the EU.

The AI Act, which was approved by lawmakers last month and is expected to be published in the Official Journal of the EU and enter into force in June, classifies AI systems by level of risk. The Act's prohibitions will apply by the end of the year, the rules on general-purpose AI will apply from June 2025, and the requirements for high-risk systems will take effect three years after entry into force.

The recruitment drive for policy and technical roles within the AI Office is already underway and has attracted a large number of candidates, reflecting keen interest in the field. The selection of the AI Office's director will begin only once the AI Act has been formally adopted.

The AI Act, the first comprehensive AI regulation from a major regulator, sorts AI applications into three levels of risk. Applications posing an unacceptable risk, such as government-run social scoring systems of the kind used in China, will be banned. High-risk applications, such as CV-screening tools that rank job applicants, will have to meet specific legal requirements. Applications that are neither banned nor classified as high-risk will be left largely unregulated.