What is the EU AI Act?
The EU AI Act sets harmonised rules for the development, placement on the market and use of AI systems in the European Union, following a proportionate risk-based approach.
The Act lays down a solid risk methodology to define “high-risk” AI systems that pose significant risks to the health, safety or fundamental rights of persons. Those AI systems will have to comply with a set of horizontal mandatory requirements for trustworthy AI, and follow conformity assessment procedures before those systems can be placed on the EU market.
Clear obligations are placed on providers of AI systems, to ensure safety and respect of existing legislation protecting fundamental rights throughout the whole AI systems’ lifecycle.
The rules will be enforced through a governance system at Member State level, and a cooperation mechanism at Union level with the establishment of a European Artificial Intelligence Board.
Measures are also proposed to support innovation, in particular through AI regulatory sandboxes and other measures, to reduce the regulatory burden and to support Small and Medium-Sized Enterprises (‘SMEs’) and start-ups.
A very important development: the placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, where physical or psychological harm is likely to occur, is forbidden. Such AI systems deploy subliminal components that individuals cannot perceive, or exploit the vulnerabilities of children and other persons due to their age or physical or mental incapacity. They do so with the intention of materially distorting a person’s behaviour, in a manner that causes or is likely to cause harm to that person or to another person.
Deadlines:
The AI Act entered into force on 1 August 2024, and will be fully applicable on 2 August 2026, with some exceptions:
1. Prohibitions and AI literacy obligations have applied since 2 February 2025.
Note: Prohibited AI Practices - The AI Act bans eight specific AI practices (below) that pose severe risks to fundamental rights, democracy, and public safety.
Note: AI Literacy Obligations - The AI Act requires the promotion of awareness and understanding of AI systems. The goal is to ensure that citizens, businesses, and policymakers can make informed decisions about AI use.
2. The governance rules and the obligations for general-purpose AI models become applicable on 2 August 2025.
3. The rules for high-risk AI systems - embedded into regulated products - have an extended transition period until 2 August 2027.
The AI Act prohibits eight practices:
1. Harmful AI-based manipulation and deception.
The AI Act bans AI systems that manipulate human behavior in a way that causes physical or psychological harm. This includes AI that uses subliminal techniques to influence people’s decisions in ways they would not consciously agree to.
Subliminal techniques refer to methods of influencing human thoughts, emotions, or behaviors below the threshold of conscious awareness. These techniques work by delivering stimuli that are not explicitly perceived by the individual but can affect decision-making, attitudes, and actions.
Examples include AI-powered social media algorithms designed to addict users by exploiting cognitive biases, and AI that subtly alters online content to push political agendas or misinformation.
2. Harmful AI-based exploitation of vulnerabilities.
The Act prohibits AI that exploits vulnerabilities due to age, disability, or socio-economic situations. This is designed to protect children, the elderly, and individuals with disabilities from AI-driven manipulation.
Examples include AI-powered toys that coerce children into making purchases, and AI-driven scams that manipulate elderly people into giving away personal data or money.
3. Social scoring.
The AI Act prohibits social scoring systems, which assess people’s trustworthiness or behavior based on their social actions. The concern is that such systems can lead to discrimination and excessive government surveillance. The EU wants to prevent AI from being used to limit people's rights based on past behavior.
4. Individual criminal offence risk assessment or prediction.
AI systems cannot be used to predict a person’s likelihood of committing a crime based on profiling, behavioral analysis, or past data.
Example: An AI model that assigns individuals a risk score for committing crimes based on zip code, race, or personal background.
5. Untargeted scraping of the internet or CCTV material to create or expand facial recognition databases.
The AI Act bans the mass collection of images and biometric data without consent. Governments and private entities cannot use AI to indiscriminately scan and store facial images from social media, CCTV footage, or websites.
Example: An AI system that scrapes LinkedIn and Facebook images to create a facial recognition database without user permission.
6. Emotion recognition in workplaces and educational institutions.
The AI Act prohibits AI that analyses emotions in workplaces and educational institutions to assess performance or behaviour. Using AI to determine whether a student is paying attention in class, or whether an employee is in a good mood at work, is considered intrusive.
Example: An AI tool in job interviews that rejects candidates based on facial expressions rather than actual skills.
7. Biometric categorisation to deduce certain protected characteristics.
AI cannot use biometric data (e.g., facial recognition, voice recognition, fingerprints) to infer characteristics such as race, political views, religion, sexual orientation or health conditions. This prevents AI from reinforcing discrimination or enabling surveillance based on sensitive personal attributes.
8. Real-time remote biometric identification for law enforcement purposes in publicly accessible spaces.
The Act prohibits law enforcement from using live facial recognition (real-time remote biometric identification) in publicly accessible spaces, except in narrowly defined situations. Exceptions include the search for missing persons, the prevention of imminent terrorist attacks, and the identification of suspects of serious crimes, subject to court approval.
Must we comply with the AI Act AND the NIS 2 Directive?
The NIS 2 Directive and the AI Act regulate different but overlapping areas of cybersecurity and artificial intelligence (AI). While they serve distinct legal purposes, there are key intersections in their application, particularly concerning cyber resilience, risk management, and the governance of AI-driven critical infrastructure.
Certain AI systems used in critical infrastructure, digital services, and cybersecurity operations fall under both legal frameworks, requiring compliance with both cybersecurity and AI-specific risk management rules. For example, AI systems used in network security monitoring, intrusion detection, fraud prevention, and financial transactions must comply with both NIS 2 cybersecurity requirements and AI Act safety and transparency rules. The AI Act designates certain AI systems as “high-risk” if they impact safety and fundamental rights.
Under NIS 2, operators of essential and important services must report cybersecurity incidents to national authorities. The AI Act also requires reporting of incidents where AI causes harm to individuals, fundamental rights, or critical infrastructure. If a cyber attack occurs due to a vulnerability in an AI-based system, both the AI Act and the NIS 2 Directive require reporting. AI providers must also notify authorities of AI system malfunctions that could compromise cybersecurity.
NIS 2 requires companies to assess third-party risks in their supply chains. The AI Act mandates that high-risk AI systems undergo conformity assessments and transparency checks before deployment. AI-powered cybersecurity tools, cloud services, and IoT systems must therefore be assessed for both cybersecurity risks (NIS 2) and AI-specific risks (AI Act).
What about DORA and the AI Act?
The Digital Operational Resilience Act (DORA) regulates financial sector resilience. AI-driven systems in the financial sector are subject to both AI-specific risk management rules (AI Act) and financial sector resilience requirements (DORA). Financial firms using AI-powered trading, fraud detection, risk assessment, and cybersecurity tools must comply with both legal acts.
The AI Act classifies certain AI systems as “high-risk”, including AI used in credit scoring, financial risk assessment, fraud detection, anti-money laundering (AML), automated trading, and investment decision-making. AI-driven financial tools must comply with AI Act obligations (transparency, bias mitigation, accuracy testing etc.) and DORA’s ICT risk management standards.
DORA requires financial institutions to assess third-party ICT service providers (including AI providers). Banks and insurers must ensure AI vendors comply with both the AI Act and DORA. For example, financial institutions outsourcing AI-based credit risk analysis must assess AI fairness and bias (AI Act) as well as AI cybersecurity and resilience (DORA). Third-party AI risk and outsourcing thus fall under both frameworks.
Dual Compliance Burden is the regulatory challenge where companies and organisations must comply with two or more overlapping legal frameworks governing similar aspects of their operations. It arises when multiple laws impose similar but distinct compliance requirements, leading to increased costs, administrative complexity, and operational strain. Organisations should develop integrated risk management programs that simultaneously address the AI Act and DORA requirements. They must do the same for compliance reporting.
6 February 2025 – Commission Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act).
The AI Act does not apply to all systems, but only to those systems that fulfil the definition of an ‘AI system’ within the meaning of Article 3(1) AI Act. The definition of an AI system is therefore key to understanding the scope of application of the AI Act.
Article 3 (1) of the AI Act defines an AI system as follows:
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
That definition comprises seven main elements:
(1) a machine-based system;
(2) that is designed to operate with varying levels of autonomy;
(3) that may exhibit adaptiveness after deployment;
(4) and that, for explicit or implicit objectives;
(5) infers, from the input it receives, how to generate outputs;
(6) such as predictions, content, recommendations, or decisions;
(7) that can influence physical or virtual environments.
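The seven elements above can be read as a checklist. As a purely illustrative sketch (not a legal test; every name below is hypothetical), one might record a candidate system against the seven elements and check them programmatically. Note that element (3), adaptiveness, is phrased as "may exhibit" and is therefore treated here as optional:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical profile of a candidate system,
    one flag per element of the Article 3(1) definition."""
    machine_based: bool               # (1) runs on hardware/software
    operates_with_autonomy: bool      # (2) some independence from human control
    may_adapt_after_deployment: bool  # (3) optional: self-learning after deployment
    has_objectives: bool              # (4) explicit or implicit objectives
    infers_outputs_from_input: bool   # (5) inference, not only fixed human-coded rules
    produces_relevant_outputs: bool   # (6) predictions, content, recommendations, decisions
    can_influence_environments: bool  # (7) physical or virtual environments

def meets_definition(p: SystemProfile) -> bool:
    # Element (3) is not a necessary condition ("may exhibit adaptiveness"),
    # so it is deliberately excluded from the conjunction below.
    return all([
        p.machine_based,
        p.operates_with_autonomy,
        p.has_objectives,
        p.infers_outputs_from_input,
        p.produces_relevant_outputs,
        p.can_influence_environments,
    ])

# A hypothetical ML-based spam filter satisfies every element:
spam_filter = SystemProfile(True, True, True, True, True, True, True)
print(meets_definition(spam_filter))  # → True
```

A real assessment is made case by case on the system's actual design and behaviour; a boolean checklist like this can only structure the analysis, not replace it.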
4 February 2025 - Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act)
The AI Act follows a risk-based approach, classifying AI systems into four different risk categories:
(i) Unacceptable risk: AI systems posing unacceptable risks to fundamental rights and Union values are prohibited under Article 5 AI Act.
(ii) High risk: AI systems posing high risks to health, safety and fundamental rights are subject to a set of requirements and obligations. These systems are classified as ‘high-risk’ in accordance with Article 6 AI Act in conjunction with Annexes I and III AI Act.
(iii) Transparency risk: AI systems posing limited transparency risk are subject to transparency obligations under Article 50 AI Act.
(iv) Minimal to no risk: AI systems posing minimal to no risk are not regulated, but providers and deployers may adhere to voluntary codes of conduct.
Article 5 prohibits the placing on the EU market, putting into service, or use of certain AI systems for manipulative, exploitative, social control or surveillance practices, which by their inherent nature violate fundamental rights and Union values.
Recital 28 of the AI Act clarifies that such practices are particularly harmful and abusive and should be prohibited because they contradict the Union values of respect for human dignity, freedom, equality, democracy, and the rule of law, as well as fundamental rights enshrined in the Charter of Fundamental Rights of the European Union (‘the Charter’), including the right to non-discrimination (Article 21 Charter), equality (Article 20 Charter), data protection (Article 8 Charter), respect for private and family life (Article 7 Charter), and the rights of the child (Article 24 Charter).
The prohibitions in Article 5 AI Act also aim to uphold the right to freedom of expression and information (Article 11 Charter), freedom of assembly and of association (Article 12 Charter), freedom of thought, conscience and religion (Article 10 Charter), the right to an effective remedy and fair trial (Article 47 Charter), and the presumption of innocence and the right of defence (Article 48 Charter).
05 September 2024 – The Commission has signed the Council of Europe Framework Convention on Artificial Intelligence (AI) on behalf of the EU.
The Convention is the first legally binding international agreement on AI and it is fully in line with the EU AI Act, the first comprehensive AI regulation in the world.
The Convention provides for a common approach to ensure that AI systems are compatible with human rights, democracy and the rule of law, while enabling innovation and trust.
It includes a number of key concepts from the EU AI Act such as a risk-based approach, transparency along the value chain of AI systems and AI-generated content, detailed documentation obligations for AI systems identified as high-risk, and risk management obligations with the possibility to introduce bans for AI systems considered a clear threat to fundamental rights.
Next steps: This signature expresses the EU’s intention to become a Party to the Convention. Following this the European Commission will prepare a proposal for a Council decision to conclude the Convention. The European Parliament should also give its consent.
July 12, 2024 – The Artificial Intelligence Act of the EU was published in the Official Journal of the European Union and will enter into force on August 1, 2024.
Note: That the Act “enters into force on August 1, 2024” means that on that date the Act becomes part of the legal system of the EU, so from then on the Act is legally binding and applicable. This date (August 1, 2024) is not a compliance deadline. You can find the deadlines below.
Deadlines
1. The Artificial Intelligence Act shall apply from 2 August 2026, with the following exceptions (taking into account the unacceptable risk associated with the use of AI in certain ways).
2. Chapters I and II (general provisions and prohibited AI practices) shall apply from 2 February 2025.
3. Chapter III Section 4 (Notifying authorities and notified bodies), Chapter V (General Purpose AI models), Chapter VII (Governance), Chapter XII (Penalties) and Article 78 (Confidentiality) shall apply from 2 August 2025, with the exception of Article 101 (Fines for providers of general-purpose AI models).
4. Article 6(1) and the corresponding obligations in this Regulation shall apply from 2 August 2027.
5. According to Article 77 (Powers of authorities protecting fundamental rights), by 2 November 2024, each Member State shall identify the public authorities or bodies “which supervise or enforce the respect of obligations under Union law protecting fundamental rights, including the right to non-discrimination, in relation to the use of high-risk AI systems”.
6. The European Commission shall, no later than 2 February 2026, provide guidelines specifying the practical implementation of Article 6 (Classification rules for high-risk AI systems) in line with Article 96 (Guidelines from the Commission on the implementation of this Regulation) together with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.
19 April 2024 - CORRIGENDUM to the position of the European Parliament adopted at first reading on 13 March 2024
Note: A corrigendum is a document issued to correct errors in a document or publication that has already been issued.
The amended text, 19 April 2024
13 March 2024 - The European Parliament approved the Artificial Intelligence Act.
The regulation, agreed in negotiations with member states in December 2023, was endorsed by MEPs with 523 votes in favour, 46 against and 49 abstentions.
It aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.
The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.
The use of remote biometric identification systems (RBI) by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations.
“Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation. Such uses may include, for example, a targeted search for a missing person or the prevention of a terrorist attack. Using such systems after the fact (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorisation linked to a criminal offence.
Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law).
Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections). Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight.
Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.
What is next: The regulation is still subject to a final lawyer-linguist check and is expected to be finally adopted before the end of the legislature (through the so-called corrigendum procedure). The law also needs to be formally endorsed by the Council.
February 2, 2024 - EU Member States unanimously endorsed the political agreement.
According to Thierry Breton, European Commissioner for Internal Market:
"We reached two important milestones in our endeavour to turn Europe into the global hub for trustworthy AI:
- Today, EU Member States unanimously endorsed the political agreement that we reached in December on the AI Act. The agreement resulted in a balanced and futureproof text, promoting trust and innovation in trustworthy AI.
- Last week, we adopted a wide range of measures to support Europe’s AI start-ups, complementing the regulatory framework.
Both milestones are equally important for European innovators in AI. They reflect our comprehensive approach to AI: promoting both trust and excellence in AI.
Our vision: a thriving European ecosystem of AI start-ups with talented researchers and engineers, developing large language models in all European languages, based on large amounts of easily accessible high-quality data, training them on the world’s fastest supercomputers, and working with industrial partners to turn them into innovative applications, with access to a large Single Market of 450 million people."
December 9, 2023 - The Council and Parliament reach a provisional agreement on the Artificial Intelligence Act.
Compared to the initial Commission proposal, the main new elements of the provisional agreement can be summarised as follows:
1. Rules on high-impact general-purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems;
2. A revised system of governance with some enforcement powers at EU level;
3. An extension of the list of prohibitions, but with the possibility for law enforcement authorities to use remote biometric identification in public spaces, subject to safeguards;
4. Better protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment prior to putting an AI system into use.
Do we have the final text of the Artificial Intelligence Act?
No. Following the provisional agreement, work will continue at technical level in the coming weeks to finalise the details of the new regulation. The presidency will submit the compromise text to the member states’ representatives for endorsement once this work has been concluded.
The agreed text will have to be formally adopted by both Parliament and Council to become EU law.
The Artificial Intelligence Act will enter into force 20 days after its publication in the Official Journal of the European Union (the official publication for EU legal acts, other acts and official information from EU institutions, bodies, offices and agencies). In our opinion this will happen during the summer of 2024.
June 14, 2023 - The European Parliament has approved its negotiating position on the proposed Artificial Intelligence Act.
The European Parliament adopted its negotiating position with 499 votes in favour, 28 against, and 93 abstentions. It also amended the list of intrusive and discriminatory uses of AI systems. The list now includes:
- “Real-time” remote biometric identification systems in publicly accessible spaces;
- “Post” remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorization;
- Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
- Predictive policing systems (based on profiling, location or past criminal behaviour);
- Emotion recognition systems in law enforcement, border management, workplace, and educational institutions; and
- Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and right to privacy).
Do we have the final text of the Artificial Intelligence Act?
No, this is not the final text.
What is next?
The Parliament will negotiate with the EU Council and the European Commission, in the trilogue process. The aim of a trilogue is to reach a provisional agreement on a legislative proposal that is acceptable to both the Parliament and the Council, the co-legislators. The Commission acts as a mediator, facilitating an agreement between the co-legislators. This provisional agreement must then be adopted by each of those institutions’ formal procedures.
25 November 2022 - The Council of the EU approved a compromise version of the proposed Artificial Intelligence Act.
There are still disagreements over the definition of AI systems. The Council believes that the definition must not include certain types of existing software. There are also difficulties with the definition of autonomy.
Prohibited AI practices - the text of the proposed Artificial Intelligence Act now treats as prohibited the use of AI for social scoring by private actors, as well as AI systems that exploit the vulnerabilities of a specific group of persons, including persons who are vulnerable due to their social or economic situation.
What about the prohibition of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces by law enforcement authorities? The text of the proposed Artificial Intelligence Act clarifies the objectives for which such use is strictly necessary, and for which law enforcement authorities should therefore be exceptionally allowed to use such systems.
Next step: The European Parliament is scheduled to vote by end of March 2023. The final EU Artificial Intelligence Act is expected to be adopted near the end of 2023.
Article 1, Subject matter.
This Regulation lays down:
(a1) harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union;
(a2) prohibitions of certain artificial intelligence practices;
(b) specific requirements for high-risk AI systems and obligations for operators of such systems;
(c) harmonised transparency rules for certain AI systems;
(d) rules on market monitoring, market surveillance and governance;
(e) measures in support of innovation.
Article 2, Scope.
1. This Regulation applies to:
(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are physically present or established within the Union or in a third country;
(b) users of AI systems who are physically present or established within the Union;
(c) providers and users of AI systems who are physically present or established in a third country, where the output produced by the system is used in the Union;
(d) importers and distributors of AI systems;
(e) product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
(f) authorised representatives of providers, which are established in the Union;
A new Title IA has been added to account for situations where AI systems can be used for many different purposes (general purpose AI), and where there may be circumstances where general purpose AI technology gets integrated into another system which may become high-risk. The compromise text specifies in Article 4b(1) that certain requirements for high risk AI systems would also apply to general purpose AI systems.
However, instead of direct application of these requirements, an implementing act would specify how they should be applied in relation to general purpose AI systems, based on a consultation and detailed impact assessment and taking into account specific characteristics of these systems and related value chain, technical feasibility and market and technological developments. The use of an implementing act will ensure that the Member States will be properly involved and will keep the final say on how the requirements will be applied in this context.
Moreover, the compromise text of Article 4b(5) also includes a possibility to adopt further implementing acts which would lay down the modalities of cooperation between providers of general purpose AI systems and other providers intending to put into service or place such systems on the Union market as high-risk AI systems, in particular as regards the provision of information.
In Article 2 an explicit reference has been made to the exclusion of national security, defence and military purposes from the scope of the AI Act. Similarly, it has been clarified that the AI Act should not apply to AI systems and their outputs used for the sole purpose of research and development and to obligations of people using AI for non-professional purposes, which would fall outside the scope of the AI Act, except for the transparency obligations.
In order to take into account the particular specificities of law enforcement authorities, a number of changes have been made to provisions relating to the use of AI systems for law enforcement purposes. Notably, some of the related definitions in Article 3, such as ‘remote biometric identification system’ and ‘real-time remote biometric identification system’, have been fine-tuned in order to clarify what situations would fall under the related prohibition and high-risk use case and what situations would not.
The compromise proposal also contains other modifications that are, subject to appropriate safeguards, meant to ensure appropriate level of flexibility in the use of high-risk AI systems by law enforcement authorities or reflect on the need to respect the confidentiality of sensitive operational data in relation to their activities.
In order to simplify the compliance framework for the AI Act, the compromise text contains a number of clarifications and simplifications to the provisions on the conformity assessment procedures. The provisions related to market surveillance have also been clarified and simplified in order to make them more effective and easier to implement, taking into account the need for a proportionate approach in this respect. Moreover, Article 41 has been thoroughly reviewed in order to limit the Commission’s discretion with regard to the adoption of implementing acts establishing common technical specifications for the requirements for high-risk AI systems and general purpose AI systems.
The compromise text also substantially modifies the provisions concerning the AI Board ('the Board'), with the objectives to ensure its greater autonomy and to strengthen its role in the governance architecture for the AI Act. In this context, Articles 56 and 58 have been revised in order to strengthen the role of the Board in such a way that it should be in a better position to provide support to the Member States in the implementation and enforcement of the AI Act. More specifically, the tasks of the Board have been extended and its composition has been specified.
In order to ensure the involvement of stakeholders in relation to all issues related to the implementation of the AI Act, including the preparation of implementing and delegated acts, a new requirement has been added for the Board to create a permanent subgroup serving as a platform for a wide range of stakeholders. Two other standing subgroups for market surveillance authorities and notifying authorities should also be established to reinforce the consistency of governance and enforcement of the AI Act across the Union.
With the objective to create a legal framework that is more innovation-friendly and in order to promote evidence-based regulatory learning, the provisions concerning measures in support of innovation in Article 53 have been substantially modified in the compromise text. Notably, it has been clarified that AI regulatory sandboxes, which are supposed to establish a controlled environment for the development, testing and validation of innovative AI systems under the direct supervision and guidance by the national competent authorities, should also allow for testing of innovative AI systems in real world conditions.
Furthermore, new provisions in Articles 54a and 54b have been added allowing unsupervised real world testing of AI systems, under specific conditions and safeguards. In both cases the compromise text clarifies how these new rules are to be interpreted in relation to other existing, sectoral legislation on regulatory sandboxes.
15 July 2022 - Council of EU: Compromise text on the AI Act.
The Commission adopted the proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act, AIA) on 21 April 2021.
In order to address concerns of many Member States that consider that the current definition of an AI system is ambiguous and too broad, and that it fails to provide sufficiently clear criteria for distinguishing AI from more classical software systems, the Czech Presidency has proposed a new version of the definition in Article 3(1), which narrows it down to systems developed through machine learning techniques and knowledge-based approaches.
The basic concepts from the OECD definition of an AI system have been kept, and additionally the concept of autonomy has been included in the definition, as per the specific request of a number of delegations. Furthermore, Recital 6 has been updated accordingly.
The harmonised rules laid down in this Regulation should apply across sectors without prejudice to existing Union law, and in particular without prejudice to Union law on data protection, consumer protection, product safety and employment. This Regulation is intended to regulate AI systems that are to be placed on the market and put into service in the Union and it should complement such existing Union law.
Machine learning approaches focus on the development of systems capable of learning from data to solve an application problem without being explicitly programmed with a set of step-by-step instructions from input to output. Learning refers to the computational process of optimising the parameters of the model from data; the model is a mathematical construct that generates an output based on input data.
The range of problems addressed by machine learning typically involves tasks for which other approaches fail, either because there is no suitable formalisation of the problem, or because the resolution of the problem is intractable with non-learning approaches. Machine learning approaches include for instance supervised, unsupervised and reinforcement learning, using a variety of methods including deep learning, statistical techniques for learning and inference (including Bayesian estimation) and search and optimisation methods.
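The parameter-optimisation process that the Recital describes can be sketched in a few lines. The following toy example is illustrative only and not drawn from the Act: a minimal supervised learning loop that fits a linear model y = w·x + b to synthetic data by gradient descent, with `w` and `b` playing the role of the "parameters of the model" optimised from data.

```python
# Minimal illustration of "learning": optimising model parameters from data.
# Model: y = w * x + b, fitted by gradient descent to synthetic points
# generated from y = 2x + 1 (all names here are illustrative, not from the Act).
data = [(x, 2 * x + 1) for x in range(10)]  # input/output training pairs

w, b = 0.0, 0.0          # parameters of the model (initially untrained)
lr = 0.01                # learning rate (step size of the optimisation)
for _ in range(2000):    # the computational process of "learning"
    # gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # parameters converge towards the generating values 2 and 1
```

No step-by-step rule from input to output is ever programmed; the mapping is recovered purely by optimising the parameters against the data, which is the distinction the Recital draws against classical software.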
Logic- and knowledge-based approaches focus on the development of systems with logical reasoning capabilities on knowledge to solve an application problem. Such systems typically involve a knowledge base and an inference engine that generates outputs by reasoning on the knowledge base.
The knowledge base, which is usually encoded by human experts, represents entities and logical relationships relevant for the application problem through formalisms based on rules, ontologies, or knowledge graphs. The inference engine acts on the knowledge base and extracts new information through operations such as sorting, searching, matching or chaining. Logic- and knowledge-based approaches include for instance knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning, expert systems and search and optimisation methods.
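The knowledge-base-plus-inference-engine architecture described above can be sketched as follows. This is a hypothetical toy example, not taken from the Act: a hand-encoded rule base and a forward-chaining inference engine that derives new facts by matching rule premises against known facts until nothing new can be extracted.

```python
# Minimal sketch of a logic/knowledge-based system: a rule base encoded by
# a (hypothetical) human expert, plus a forward-chaining inference engine.
# Each rule is (set_of_premises, conclusion); all facts are plain strings.
rules = [
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_penguin"),
]

def infer(facts, rules):
    """Repeatedly fire rules whose premises all hold, extracting new facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # matching: do all premises appear in the current fact base?
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # chaining: derived fact feeds later rules
                changed = True
    return facts

print(infer({"has_feathers", "lays_eggs", "cannot_fly"}, rules))
```

Note how the output depends on chaining: "is_penguin" is only derivable after "is_bird" has itself been inferred, which is the "extracting new information through operations such as matching or chaining" the Recital refers to.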
‘Artificial intelligence system’ (AI system) means a system that is designed to operate with a certain level of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of human-defined objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts.
21 April 2021 - Proposal for a Regulation laying down harmonised rules on artificial intelligence.
The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.
Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems.
A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU).
To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.
Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.
At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or immaterial.
A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law.
To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council, and it ensures the protection of ethical principles, as specifically requested by the European Parliament.
8.4.2019 - European Commission, Building Trust in Human-Centric Artificial Intelligence.
The European AI strategy and the coordinated plan make clear that trust is a prerequisite to ensure a human-centric approach to AI: AI is not an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being.
To achieve this, the trustworthiness of AI should be ensured. The values on which our societies are based need to be fully integrated in the way AI develops. The Union is founded on the values of respect for human dignity, freedom, democracy, equality, the rule of law and respect for human rights, including the rights of persons belonging to minorities. These values are common to the societies of all Member States in which pluralism, non-discrimination, tolerance, justice, solidarity and equality prevail. In addition, the EU Charter of Fundamental Rights brings together – in a single text – the personal, civic, political, economic and social rights enjoyed by people within the EU.
The EU has a strong regulatory framework that will set the global standard for human-centric AI. The General Data Protection Regulation ensures a high standard of protection of personal data, and requires the implementation of measures to ensure data protection by design and by default. The Free Flow of Non-Personal Data Regulation removes barriers to the free movement of non-personal data and ensures the processing of all categories of data anywhere in Europe. The recently adopted Cybersecurity Act will help to strengthen trust in the online world, and the proposed ePrivacy Regulation also aims at this goal.
Nevertheless, AI brings new challenges because it enables machines to “learn” and to take and implement decisions without human intervention. Before long, this kind of functionality will become standard in many types of goods and services, from smart phones to automated cars, robots and online applications. Yet, decisions taken by algorithms could result from data that is incomplete and therefore not reliable, they may be tampered with by cyber-attackers, or they may be biased or simply mistaken. Unreflectively applying the technology as it develops would therefore lead to problematic outcomes as well as reluctance by citizens to accept or use it.
Instead, AI technology should be developed in a way that puts people at its centre and is thus worthy of the public’s trust. This implies that AI applications should not only be consistent with the law, but also adhere to ethical principles and ensure that their implementations avoid unintended harm. Diversity in terms of gender, racial or ethnic origin, religion or belief, disability and age should be ensured at every stage of AI development. AI applications should empower citizens and respect their fundamental rights.
They should aim to enhance people’s abilities, not replace them, and also enable access by people with disabilities. Therefore, there is a need for ethics guidelines that build on the existing regulatory framework and that should be applied by developers, suppliers and users of AI in the internal market, establishing an ethical level playing field across all Member States. This is why the Commission has set up a high-level expert group on AI representing a wide range of stakeholders and has tasked it with drafting AI ethics guidelines as well as preparing a set of recommendations for broader AI policy. At the same time, the European AI Alliance, an open multi-stakeholder platform with over 2700 members, was set up to provide broader input for the work of the AI high-level expert group.
7.12.2018 - European Commission, Coordinated Plan on Artificial Intelligence.
This plan brings together a set of concrete and complementary actions at EU, national and regional level in view of:
- Boosting investments and reinforcing excellence in AI technologies and applications which are trustworthy and “ethical and secure by design”. Investments shall take place in a stable regulatory context which enables experimentation and supports disruptive innovation across the EU, ensuring the widest and best use of AI by the European economy and society.
- Building on Europe’s strengths, to develop and implement in partnership with industry and Member States shared agendas for industry-academia collaborative Research and Development (R&D) and innovation.
- Adapting learning and skilling programmes and systems to prepare Europe’s society and its future generations for AI.
- Building up essential capacities in Europe underpinning AI such as data spaces and world-class reference sites for testing and experimentation.
- Making public administrations in Europe frontrunners in the use of AI.
- Implementing, on the basis of expert work, clear ethics guidelines for the development and the use of AI in full respect of fundamental rights, with a view to set global ethical standards and be a world leader in ethical, trusted AI.
- Where needed, reviewing the existing national and European legal frameworks to better adapt them to specific challenges.
This digital transformation requires in many cases a significant upgrading of the currently available infrastructure. The effective implementation of AI will require the completion of the Digital Single Market and its regulatory framework including the swift adoption of the Commission proposal for a European Cybersecurity Industrial, Technology and Research Competence Centre and the Network of National Coordination Centres, reinforced connectivity through spectrum coordination, very fast 5G mobile networks and optical fibres, next generation clouds, as well as satellite technologies.
High-performance computing and AI will increasingly intertwine as we transition to a future using new computing, storage and communication technologies. Furthermore, infrastructures should be both accessible and affordable to ensure an inclusive AI adoption across Europe, particularly by small and medium-sized enterprises (SMEs).
Industry, and in particular small and young companies, will need to be in a position to be aware and able to integrate these technologies in new products, services and related production processes and technologies, including by upskilling and reskilling their workforce. Standardisation will also be essential for the development of AI in the Digital Single Market, helping notably to ensure interoperability.
June 2018 - The European AI Alliance.
The European AI Alliance is an initiative of the European Commission to establish an open policy dialogue on Artificial Intelligence. Since its launch in 2018, the AI Alliance has engaged around 6000 stakeholders through regular events, public consultations and online forum exchanges.
The AI Alliance was initially created to steer the work of the High-Level Expert Group on Artificial Intelligence (AI HLEG).
The group’s Ethics Guidelines as well as its Policy and Investment Recommendations were important documents that shaped the concept of Trustworthy AI, contributing to the Commission’s approach to AI. This work was based on a mix of expert input and community driven feedback.
25 April 2018 - The European Commission outlines a European approach to boost investment and set ethical guidelines.
The European Commission is presenting a series of measures to put artificial intelligence (AI) at the service of Europeans and boost Europe's competitiveness in this field.
The Commission is proposing a three-pronged approach to increase public and private investment in AI, prepare for socio-economic changes, and ensure an appropriate ethical and legal framework. This follows European leaders' call for a European approach on AI.
Europe has world-class researchers, laboratories and start-ups in the field of AI. The EU is also strong in robotics and has world-leading transport, healthcare and manufacturing sectors that should adopt AI to remain competitive. However, fierce international competition requires coordinated action for the EU to be at the forefront of AI development.
The EU (public and private sectors) should increase investments in AI research and innovation by at least €20 billion between now and the end of 2020. To support these efforts, the Commission is increasing its investment to €1.5 billion for the period 2018-2020 under the Horizon 2020 research and innovation programme. This investment is expected to trigger an additional €2.5 billion of funding from existing public-private partnerships, for example on big data and robotics.
It will support the development of AI in key sectors, from transport to health; it will connect and strengthen AI research centres across Europe, and encourage testing and experimentation. The Commission will also support the development of an "AI-on-demand platform" that will provide access to relevant AI resources in the EU for all users.
Additionally, the European Fund for Strategic Investments will be mobilised to provide companies and start-ups with additional support to invest in AI. With the European Fund for Strategic Investments, the aim is to mobilise more than €500 million in total investments by 2020 across a range of key sectors.
The Commission will also continue to create an environment that stimulates investment. As data is the raw material for most AI technologies, the Commission is proposing legislation to open up more data for re-use and measures to make data sharing easier. This covers data from public utilities and the environment as well as research and health data.
With the dawn of artificial intelligence, many jobs will be created, but others will disappear and most will be transformed. This is why the Commission is encouraging Member States to modernise their education and training systems and support labour market transitions, building on the European Pillar of Social Rights. The Commission will support business-education partnerships to attract and keep more AI talent in Europe, set up dedicated training schemes with financial support from the European Social Fund, and support digital skills, competencies in science, technology, engineering and mathematics (STEM), entrepreneurship and creativity. Proposals under the EU's next multiannual financial framework (2021-2027) will include strengthened support for training in advanced digital skills, including AI-specific expertise.
As with any transformative technology, artificial intelligence may raise new ethical and legal questions, related to liability or potentially biased decision-making. New technologies should not mean new values. The Commission will present ethical guidelines on AI development by the end of 2018, based on the EU's Charter of Fundamental Rights, taking into account principles such as data protection and transparency, and building on the work of the European Group on Ethics in Science and New Technologies.
To help develop these guidelines, the Commission will bring together all relevant stakeholders in a European AI Alliance. By mid-2019 the Commission will also issue guidance on the interpretation of the Product Liability Directive in the light of technological developments, to ensure legal clarity for consumers and producers in case of defective products.
9 March 2018 - The European Commission kicks off work on marrying cutting-edge technology and ethical standards.
The European Commission is setting up a group on artificial intelligence to gather expert input and rally a broad alliance of diverse stakeholders.
The expert group will draw up a proposal for guidelines on AI ethics, building on today's statement by the European Group on Ethics in Science and New Technologies.
From better healthcare to safer transport and more sustainable farming, artificial intelligence (AI) can bring major benefits to our society and economy. And yet, questions related to the impact of AI on the future of work and existing legislation are raised. This calls for a wide, open and inclusive discussion on how to use and develop artificial intelligence both successfully and in an ethically sound manner.
Objectives of the High-Level Expert Group on Artificial Intelligence.
The general objective of the group shall be to support the implementation of the European strategy on AI. This will include the elaboration of recommendations on future AI-related policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges.
In particular, the group will be tasked to:
1. Advise the Commission on next steps addressing AI-related mid to long-term challenges and opportunities through recommendations which will feed into the policy development process, the legislative evaluation process and the development of a next-generation digital strategy.
2. Support the Commission on further engagement and outreach mechanisms to interact with a broader set of stakeholders in the context of the AI Alliance, share information and gather their input on the group's and the Commission's work.
3. Propose to the Commission AI ethics guidelines, covering issues such as fairness, safety, transparency, the future of work, democracy and, more broadly, the impact on the application of the Charter of Fundamental Rights, including privacy and personal data protection, dignity, consumer protection and non-discrimination. These guidelines will build on the work of the European Group on Ethics in Science and New Technologies (EGE, an independent advisory body established by the President of the European Commission) and of the EU Fundamental Rights Agency in this area (the Fundamental Rights Agency is carrying out an assessment of the current challenges faced by producers and users of new technology with respect to fundamental rights compliance under the project "Big Data and Fundamental Rights").