Artificial Intelligence in Aalto
In a large organisation like Aalto University, there are different ways to interact with AI: you might be a user of Aalto-approved AI systems, like the majority of us; you might be planning to deploy a new AI system; or you might be developing a new AI system or training an AI model. This page gives you a quick overview of the AI Act and how it may affect your work at Aalto University.
The AI Act identifies high-risk and prohibited AI systems, which helps us understand when we need to be more careful in developing or purchasing AI technologies. Table 1 provides an overview of the types of AI systems as they are classified in the AI Act.
| | ⚪ General-purpose AI models | 🟢 Minimal/no-risk AI systems | 🟡 Limited-risk AI systems | 🔴 High-risk AI systems | ❌ Prohibited AI practices |
|---|---|---|---|---|---|
| Examples of AI models or systems | A (very large) AI model that can be used for various tasks and integrated into applications (e.g. LLMs) | Spam filters, recommendation systems, spell checkers, translators, speech-to-text | Chatbots, deepfakes, generative AI used to inform the public on matters of public interest | Profiling natural persons, biometric identification, determining access to education, learning evaluation, recruitment tools (for a more detailed definition, see the AI Act relevant definitions section below); AI systems that are safety components of products, or are themselves products, covered by other EU laws and evaluated by a third party (e.g. medical devices, toys, machinery) | Deception, exploitation of vulnerabilities, social scoring, crime prediction, inferring emotions in the workplace or school, biometric categorisation of special categories of personal data |
If you are a user of AI systems, there's no need to worry about the AI Act. Your focus should be on using tools that are approved by Aalto. As a community, it's crucial to share any issues you encounter: please report them to IT services or discuss them on Daily AI, Aalto Social. We will also be offering more training for staff on 'AI literacy', as required by the AI Act. Be sure to explore the current AI training opportunities. If you're curious about the types of AI systems we use at Aalto, visit the 'AI in Aalto' page for more information.
If you are developing an AI system or AI model, or acquiring or deploying some existing AI technology, you should consider whether you are doing this for research purposes or for other purposes. Continue reading below to understand how the AI Act affects your work.
What are the legal obligations outlined in the AI Act for research? How about ethics?
| | ⚪ General-purpose AI models | 🟢 Minimal/no-risk AI systems | 🟡 Limited-risk AI systems | 🔴 High-risk AI systems | ❌ Prohibited AI practices |
|---|---|---|---|---|---|
| Legal obligations | No legal obligations for any category: the AI Act does not apply! (Data protection and other legislation still applies, of course.) | | | | |
| Ethics recommendations | Ethical pre-review is optional and left to the researcher's discretion | Ethical pre-review not needed | Ethical pre-review is recommended | Ethical pre-review is recommended | Ethical pre-review is highly recommended |
Scientific research and development is exempted from the scope of the AI Act. There are no restrictions on the research and development of AI systems or AI models, including their output, that are specifically developed and put into service for the sole purpose of scientific research and development. This exclusion is necessary because the AI Act aims to foster innovation, respect the freedom of science, and not undermine research and development activity. However, any other AI system used for conducting research and development activity remains subject to the AI Act.
Research and development activity must always comply with ethical and professional standards for scientific research and with applicable EU law. While there may not be mandatory rules requiring ethical pre-review when developing certain high-risk AI systems, we recommend that Aalto researchers seek advice and, where appropriate, apply for the concise ethics pre-review process. If you are in doubt, get in touch with [email protected].
Please note that you might be a "provider" (you have created or modified an AI system and made it accessible to others) or a "deployer" (you are installing an external AI system for others): different legal obligations apply.

| | ⚪ General-purpose AI models | 🟢 Minimal/no-risk AI systems | 🟡 Limited-risk AI systems | 🔴 High-risk AI systems | ❌ Prohibited AI practices |
|---|---|---|---|---|---|
| Legal obligations | Information and transparency obligations | No obligations | Information and transparency obligations | You need to follow various obligations as outlined by the AI Act. Get in touch with Aalto legal experts. | You cannot make these systems available to others. |
| Ethics recommendations | See Table 2 | See Table 2 | See Table 2 | See Table 2 | See Table 2 |
When the development of AI is no longer within scientific research (e.g. you are collaborating with a company), and when an AI system or AI model is "placed on the market" or "put into service" as a result of such research and development activity, the AI Act applies to, inter alia, the providers and deployers of that AI system or AI model.
As regards product-oriented research, testing and development activity on AI systems or models, the AI Act does not apply before the products are placed on the market or put into service. However, testing in real-world conditions is not covered by this exclusion.
The AI Act does not apply to AI systems released under free and open-source licences, unless they are high-risk AI systems, fall under prohibited AI practices, or are subject to transparency obligations. If you are unsure about which licence to use or which outlet is best for publishing your AI system/model, get in touch with [email protected].
In this section we have introduced some definitions from the AI Act. The key terms are:

- 'Provider'
- 'Deployer'
- Prohibited AI practices
- High-risk AI systems
- Limited-risk AI systems
- AI systems with transparency obligations, e.g. generating or manipulating image, audio or video content constituting a deep fake, or generating or manipulating text to inform on matters of public interest
- Minimal/no-risk AI systems
If your task is not research, then you might have a few more legal obligations, as outlined in Table 3. You do not have to assess these obligations yourself; just get in touch with Aalto legal services.
Most of the above also applies to students: for educational purposes you can explore various AI systems; just be mindful if you are working on releasing an AI system/model for others to use. Students (and teachers!) are not left alone on this journey with AI technologies: get in touch with student services if you have questions, comments, or ideas on using AI for educational purposes. Please also familiarise yourself with the guidance 'Guidance for the use of artificial intelligence in teaching and learning at Aalto University'.
This page outlines a simple and practical summary of the AI Act and how it can affect some of us in the Aalto community. The AI regulatory landscape is still being clarified in the EU and the rest of the world. Let's remember that the main goal of the AI Act is to help us as European citizens and protect our human rights, not to create blockers or impediments to research, innovation, and education. Let's keep the discussions ongoing and keep this page up to date with relevant materials.
Related guidance:

- How to ensure research integrity and responsible conduct of research when using AI? How should we interpret concepts such as reproducibility, bearing responsibility for the correctness of the results presented, respecting the authorship of others, and data protection?
- Guidance for the use of artificial intelligence in teaching and learning at Aalto University