Development of emerging technologies requires enabling regulation – CSC gave feedback on the proposal for an AI Liability Directive

Emerging technologies, such as AI, are important drivers of Europe’s future competitiveness and the wellbeing of its citizens. To ensure that Europe is at the forefront of technological development, the EU must create a coherent and enabling regulatory framework for the development of European AI technologies and applications, keeping in mind what the EU aims to achieve with new technologies in general. As data is the core ingredient of AI, the regulatory framework must be coherent with the aim of creating a flourishing data economy. Both the AI Act and the AI Liability Directive (AILD) must therefore be designed so that they do not create unnecessary barriers to the cross-sectoral re-use of data or to the development and uptake of AI in Europe.

In light of the above, CSC appreciates the staged approach adopted in the AILD proposal, whereby a minimally invasive approach is taken as the starting point. However, the EU must be careful not to hamper innovation with the regulation it creates. The future development of data and new technologies is difficult to predict, and regulating them strictly could create barriers to future innovation. The possibility of soft law instruments must be sufficiently explored. Furthermore, AI is only one of several emerging technologies; the scope of any new regulatory framework should therefore, from day one, be wider and focus primarily on the purposes for which a technology is used and its societal impact, rather than on the technology as such.

In order to make AI a European asset and to promote the sustainable building of European competences, the EU must aim for technology-neutral regulation. It is particularly important to ensure that regulation does not hamper RDI efforts related to AI. Therefore, any regulation must concern only final products and services, not the underlying research. Liability must also rest solely with the developer (or user) of the final product, not with the researcher, especially where the developer has made use of research output (code) that the researcher, or any other producer of openly shared components or data, has published under a free or open license.

Given the tight linkage between the AILD proposal and the AI Act proposal, the impact of the AILD will depend to a large extent on the content of the final AI Act. In particular, the definition of AI in the AI Act will be key to determining the scope of the overall regulatory framework. It is crucial to ensure that this definition is clear and does not create legal uncertainty or loopholes.

In addition to a coherent and enabling regulatory framework, the development of AI in Europe will require sustainable funding, increased availability of data to “fuel” AI, and a growing supply of competent professionals for developing and overseeing AI technologies and applications, as well as data skills and competences in all fields of industry. The regulatory framework for AI must therefore be closely aligned with the relevant funding programmes as well as with EU policies on data and competence development.