Feedback on the Commission's Artificial Intelligence Act proposal
It is important to support the development of European AI technologies and applications both financially and by building an interoperable, horizontal, state-of-the-art European data and computing ecosystem. At the same time, it must be ensured that the use of AI does not jeopardise the safety or fundamental rights of Europeans. The Commission's AI Act proposal is an ambitious attempt towards this end, but its feasibility must still be assessed and its ambiguity reduced.
The definition of AI used in the Regulation cannot rely on existing technologies, or the whole Regulation will become outdated as technology evolves; on the other hand, giving the Commission the power to amend the definition through delegated acts weakens legal certainty. The definition should therefore be made more generic, and the focus of regulation should be on the purposes for which AI is used, not on the technology itself. Regulation must also take into account the context in which AI is used and set restrictions accordingly, for example without limiting research use. The risk-based approach is a good idea, but the requirements imposed on high-risk AI systems must be realistic: for example, AI training data (Art. 10.3) cannot be expected to be entirely free of errors.
AI will have a significant impact on Europe's competitiveness and on the wellbeing of its citizens. It is crucial to support the development of European AI technologies and applications both financially and by building an interoperable, horizontal ecosystem of European state-of-the-art data and HPC infrastructures. It is of utmost importance to make sure that the use of AI does not jeopardise the safety or fundamental rights of Europeans. The AI Act proposal is an ambitious attempt towards this end. However, it must be carefully assessed to make sure that its provisions are feasible and that they do not leave too much room for interpretation. Human-centricity must be kept in mind as the key overarching principle for the development of AI. In addition, what is needed, and what cannot be written into any regulation, is trust, transparency and collaboration between different stakeholders.
Data is the raw material for AI. As a phenomenon, AI is rapidly evolving and hard to anticipate, so there is a risk of overregulation that may hamper innovation. Should AI regulation be made at all, the systems falling within its scope cannot be defined by reference to existing technologies. Not only would such a definition soon become outdated, it would also create a loophole: if a certain technology is not included in the Regulation's definition of AI, it does not have to comply with the Regulation. This would create problematic situations where an activity is prohibited when carried out with one specific technology but allowed when carried out with another. Moreover, if the EC has the power to update the definition of AI whenever necessary, there is no legal certainty, as any change to the definition would also change the scope of the Regulation.
In light of the above, the definition of AI systems must be made generic enough to cover emerging technologies without the Regulation having to be updated. The aim must be to regulate certain purposes for which technology is used, not the technologies as such. Regulation must also take into account the context in which AI is used and, for example, not limit the use of AI for research and innovation purposes, to avoid creating barriers to a flourishing data economy.
CSC welcomes the EC's risk-based approach, whereby most of the provisions of the AI Act concern only prohibited or high-risk AI systems. However, such classification requires that the definitions of, and requirements for, prohibited and high-risk AI systems are formulated clearly and precisely, to avoid leaving too much room for interpretation. Vague formulations, such as 'psychological harm' in Art. 5.1(a), open the door to subjective and even arbitrary interpretations of the Regulation.
It is crucial to make sure that high-risk AI systems do not become de facto prohibited ones due to impossibly strict requirements imposed on them. For example, the requirements for the quality of training, validation and testing data in Art. 10.3 must be designed so that they can be met in practice. The aim of avoiding biased data is valid, but the requirement that the data be entirely free of errors is unrealistic and must therefore be re-assessed.
Considering the role of data as the fuel of AI, the AI Act must be closely aligned with the EU's data regulation, making sure, for example, that individuals are informed about the purposes for which their personal data is used. In general, it is essential that the use of AI is transparent. This includes the transparency of algorithms, which is crucial for the human oversight of AI. It must be noted, however, that effective human oversight requires adequately skilled professionals. Competence development in all fields and sectors is therefore key to ensuring not only the development of state-of-the-art AI systems in Europe but also their appropriate use.