With the Council of the EU and the European Parliament having adopted their positions on the Commission proposal, the negotiations on the AI Act have now moved on to the interinstitutional phase, the so-called trilogues, where the final formulation of the Act will be agreed. Let us outline our main expectations and recommendations for the final stretch of the negotiations.

First things first: what is AI?

One of the most controversial issues in the negotiations on the AI Act has been the very definition of AI. In the original Commission proposal, AI was defined with a list of broad technological areas to be included in the scope of the Act. The Parliament has suggested an even broader (and more vague) definition whereby almost any system “operating with a level of autonomy” could be interpreted as AI.

The Council’s proposal is more focused, not based on a long list of technology families (some of which may eventually be obsolete) but limiting the definition to complex systems using machine learning and/or logic- and knowledge-based approaches. This definition is broad enough to not require constant updating but narrow enough to only capture the areas of AI that are most likely to pose risks.

This would also make it clear that the Member States can still maintain the right to have national legislation regarding computer systems that do not pose such risks, and should therefore not be affected by the AI Act (e.g. Finland’s legislation concerning automated decision making in public administration).

Don’t kill research and innovation!

The AI Act will create a number of new obligations for the developers and users of AI systems, and the suggestions by the Council and the Parliament to include the so-called general-purpose AI in the scope of the Act would extend these obligations even further, e.g. to developers of open-source solutions.

To ensure that the obligations created by the AI Act do not kill all AI innovation in Europe, it is crucial to adopt the proposal that the Council has made to exclude R&D and non-professional purposes from the scope of the Act. Any regulation must concern only the final products and services, not the underlying research.

For Europe to benefit from its R&D investments, the regulation must fulfill two prerequisites. First, the boundary between allowed and banned applications should be clear and understandable. Second, the certification procedure for AI-based products and services should be predictable, rapid, and affordable, so that the regulation does not hinder companies, especially SMEs, from benefitting from AI-based innovations.

The way forward: technology-neutral regulation and support for R&D

The difficulties in defining AI clearly demonstrate how hard it is to regulate specific technologies. In the future, we recommend keeping regulation as technology-neutral as possible while paying particular attention to not creating legal barriers for research and innovation. Coherence between the various pieces of legislation related to the data economy is also crucial.

An enabling regulatory framework must be paired with other measures to support research and innovation, such as investments in competence development and research infrastructures. In the case of AI, particular attention must be paid to competences and infrastructures related to the two main building blocks of AI innovation: data and computing.

Heikki Ailisto

The author is a research professor at VTT where he coordinates applied AI research. He is also with the Finnish Center for AI flagship (FCAI), where he is a member of the Steering Group and leader of the Industry and Society Program.

Aleksi Kallio

The author is the manager of the AI & data analytics group at CSC. The group focuses on large-scale computing challenges related to AI technologies, provides expert support for AI research, and supports public and private organisations in adopting AI solutions.

Petri Myllymäki

The author is a professor of artificial intelligence and machine learning at the University of Helsinki. He serves as Vice-Director of the Finnish Center for AI flagship (FCAI).