(1) ‘AI system’ means a machine-based system that is designed to operate with varying levels
of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or
implicit objectives, infers, from the input it receives, how to generate outputs such as
predictions, content, recommendations, or decisions that can influence physical or virtual
environments;
(2) ‘risk’ means the combination of the probability of an occurrence of harm and the
severity of that harm;
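Note (illustrative, not part of the Regulation): point (2) defines risk as a combination of probability and severity but does not prescribe how the two factors are combined. A common operationalisation in risk-management practice is their product, sketched here with assumed symbols p and s:

```latex
% A minimal sketch, assuming the common product form; the Regulation
% itself does not fix a combination function.
% p = probability of an occurrence of harm (e.g. in [0, 1])
% s = severity of that harm (on an assumed ordinal or cardinal scale)
\[
  \mathrm{risk} = p \cdot s
\]
```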
(3) ‘provider’ means a natural or legal person, public authority, agency or other body that
develops an AI system or a general-purpose AI model or that has an AI system or a
general-purpose AI model developed and places it on the market or puts the AI system
into service under its own name or trademark, whether for payment or free of charge;
(4) ‘deployer’ means a natural or legal person, public authority, agency or other body using an
AI system under its authority except where the AI system is used in the course of a
personal non-professional activity;
(5) ‘authorised representative’ means a natural or legal person located or established in the
Union who has received and accepted a written mandate from a provider of an AI system
or a general-purpose AI model to, respectively, perform and carry out on its behalf the
obligations and procedures established by this Regulation;
(6) ‘importer’ means a natural or legal person located or established in the Union that places
on the market an AI system that bears the name or trademark of a natural or legal person
established in a third country;
(7) ‘distributor’ means a natural or legal person in the supply chain, other than the provider or
the importer, that makes an AI system available on the Union market;
(8) ‘operator’ means a provider, product manufacturer, deployer, authorised representative,
importer or distributor;
(9) ‘placing on the market’ means the first making available of an AI system or a general-
purpose AI model on the Union market;
(10) ‘making available on the market’ means the supply of an AI system or a general-purpose
AI model for distribution or use on the Union market in the course of a commercial
activity, whether in return for payment or free of charge;
(11) ‘putting into service’ means the supply of an AI system by the provider for first use
directly to the deployer or for own use in the Union for its intended purpose;
(12) ‘intended purpose’ means the use for which an AI system is intended by the provider,
including the specific context and conditions of use, as specified in the information
supplied by the provider in the instructions for use, promotional or sales materials and
statements, as well as in the technical documentation;
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in
accordance with its intended purpose, but which may result from reasonably foreseeable
human behaviour or interaction with other systems, including other AI systems;
(14) ‘safety component’ means a component of a product or of a system which fulfils a safety
function for that product or system, or the failure or malfunctioning of which endangers the
health and safety of persons or property;
(15) ‘instructions for use’ means the information provided by the provider to inform the
deployer of, in particular, an AI system’s intended purpose and proper use;
(16) ‘recall of an AI system’ means any measure aiming to achieve the return to the provider or
taking out of service or disabling the use of an AI system made available to deployers;
(17) ‘withdrawal of an AI system’ means any measure aiming to prevent an AI system in the
supply chain being made available on the market;
(18) ‘performance of an AI system’ means the ability of an AI system to achieve its intended
purpose;
(19) ‘notifying authority’ means the national authority responsible for setting up and carrying
out the necessary procedures for the assessment, designation and notification of conformity
assessment bodies and for their monitoring;
(20) ‘conformity assessment’ means the process of demonstrating whether the requirements set
out in Chapter II, Section 2 relating to a high-risk AI system have been fulfilled;
(21) ‘conformity assessment body’ means a body that performs third-party conformity
assessment activities, including testing, certification and inspection;
(22) ‘notified body’ means a conformity assessment body notified in accordance with this
Regulation and other relevant Union harmonisation legislation as listed in Section B of
Annex I;
(23) ‘substantial modification’ means a change to an AI system after its placing on the market
or putting into service which is not foreseen or planned in the initial conformity
assessment carried out by the provider and as a result of which the compliance of the AI
system with the requirements set out in Chapter II, Section 2 is affected or results in a
modification to the intended purpose for which the AI system has been assessed;
(24) ‘CE marking’ means a marking by which a provider indicates that an AI system is in
conformity with the requirements set out in Chapter II, Section 2 and other applicable
Union harmonisation legislation listed in Annex I, providing for its affixing;
(25) ‘post-market monitoring system’ means all activities carried out by providers of AI
systems to collect and review experience gained from the use of AI systems they place
on the market or put into service for the purpose of identifying any need to immediately
apply any necessary corrective or preventive actions;
(26) ‘market surveillance authority’ means the national authority carrying out the activities and
taking the measures pursuant to Regulation (EU) 2019/1020;
(27) ‘harmonised standard’ means a harmonised standard as defined in Article 2(1), point (c), of
Regulation (EU) No 1025/2012;
(28) ‘common specification’ means a set of technical specifications as defined in Article 2,
point (4) of Regulation (EU) No 1025/2012, providing means to comply with certain
requirements established under this Regulation;
(29) ‘training data’ means data used for training an AI system through fitting its learnable
parameters;
(30) ‘validation data’ means data used for providing an evaluation of the trained AI system and
for tuning its non-learnable parameters and its learning process in order, inter alia, to
prevent underfitting or overfitting;
(31) ‘validation data set’ means a separate data set or part of the training data set, either as a
fixed or variable split;
(32) ‘testing data’ means data used for providing an independent evaluation of the AI system
in order to confirm the expected performance of that system before its placing on the
market or putting into service;
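Note (illustrative, not part of the Regulation): points (29) to (32) mirror the conventional machine-learning data split. The sketch below maps each definition onto that convention; the variable names and the fixed 70/15/15 partition are assumptions for illustration only:

```python
# A minimal sketch of the data roles in points (29)-(32); the sizes and
# names are assumptions, not anything the Regulation prescribes.
import random

data = list(range(1000))    # stand-in for a labelled data set
random.seed(0)
random.shuffle(data)

train = data[:700]          # (29) training data: fits the learnable parameters
validation = data[700:850]  # (30)/(31) validation data (set): tunes
                            # non-learnable parameters (e.g. hyperparameters)
                            # and guards against under- and overfitting
test = data[850:]           # (32) testing data: kept aside for one independent
                            # evaluation before placing on the market

# The testing data must stay disjoint from the data used to build the system.
assert not set(test) & (set(train) | set(validation))
```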
(33) ‘input data’ means data provided to or directly acquired by an AI system on the basis of
which the system produces an output;
(34) ‘biometric data’ means personal data resulting from specific technical processing relating
to the physical, physiological or behavioural characteristics of a natural person, such as
facial images or dactyloscopic data;
(35) ‘biometric identification’ means the automated recognition of physical, physiological,
behavioural, or psychological human features for the purpose of establishing the identity
of a natural person by comparing biometric data of that individual to biometric data of
individuals stored in a database;
(36) ‘biometric verification’ means the automated, one-to-one verification, including
authentication, of the identity of natural persons by comparing their biometric data to
previously provided biometric data;
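Note (illustrative, not part of the Regulation): the operative difference between points (35) and (36) is one-to-many versus one-to-one comparison. The sketch below is a toy contrast; the similarity function and the THRESHOLD value are hypothetical placeholders, not a real biometric matcher:

```python
# Toy contrast between point (35) (identification, one-to-many) and
# point (36) (verification, one-to-one). `similarity` and THRESHOLD are
# hypothetical placeholders.
from typing import Optional

THRESHOLD = 0.9  # assumed decision threshold

def similarity(a: list[float], b: list[float]) -> float:
    # Toy cosine similarity; a real system would use a trained matcher.
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def identify(probe: list[float], database: dict[str, list[float]]) -> Optional[str]:
    # Point (35): compare the probe against every enrolled template.
    best = max(database, key=lambda name: similarity(probe, database[name]))
    return best if similarity(probe, database[best]) >= THRESHOLD else None

def verify(probe: list[float], enrolled: list[float]) -> bool:
    # Point (36): compare the probe against one previously provided template.
    return similarity(probe, enrolled) >= THRESHOLD
```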
(37) ‘special categories of personal data’ means the categories of personal data referred to in
Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and
Article 10(1) of Regulation (EU) 2018/1725;
(38) ‘sensitive operational data’ means operational data related to activities of prevention,
detection, investigation or prosecution of criminal offences, the disclosure of which
could jeopardise the integrity of criminal proceedings;
(39) ‘emotion recognition system’ means an AI system for the purpose of identifying or
inferring emotions or intentions of natural persons on the basis of their biometric data;
(40) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural
persons to specific categories on the basis of their biometric data, unless it is ancillary to
another commercial service and strictly necessary for objective technical reasons;
(41) ‘remote biometric identification system’ means an AI system for the purpose of identifying
natural persons, without their active involvement, typically at a distance through the
comparison of a person’s biometric data with the biometric data contained in a reference
database;
(42) ‘real-time remote biometric identification system’ means a remote biometric identification
system whereby the capturing of biometric data, the comparison and the identification all
occur without a significant delay, comprising not only instant identification, but also
limited short delays in order to avoid circumvention;
(43) ‘post remote biometric identification system’ means a remote biometric identification
system other than a real-time remote biometric identification system;
(44) ‘publicly accessible space’ means any publicly or privately owned physical place
accessible to an undetermined number of natural persons, regardless of whether certain
conditions for access may apply, and regardless of the potential capacity restrictions;
(45) ‘law enforcement authority’ means:
(a) any public authority competent for the prevention, investigation, detection or
prosecution of criminal offences or the execution of criminal penalties, including the
safeguarding against and the prevention of threats to public security; or
(b) any other body or entity entrusted by Member State law to exercise public authority
and public powers for the purposes of the prevention, investigation, detection or
prosecution of criminal offences or the execution of criminal penalties, including the
safeguarding against and the prevention of threats to public security;
(46) ‘law enforcement’ means activities carried out by law enforcement authorities or on their
behalf for the prevention, investigation, detection or prosecution of criminal offences or
the execution of criminal penalties, including safeguarding against and preventing threats
to public security;
(47) ‘AI Office’ means the Commission’s function of contributing to the implementation,
monitoring and supervision of AI systems and AI governance carried out by the
European Artificial Intelligence Office established by Commission Decision of
24 January 2024; references in this Regulation to the AI Office shall be construed as references
to the Commission;
(48) ‘national competent authority’ means a notifying authority or a market surveillance
authority;
(49) ‘serious incident’ means an incident or malfunctioning of an AI system that directly or
indirectly leads to any of the following:
(a) the death of a person, or serious harm to a person’s health;
(b) a serious and irreversible disruption of the management or operation of critical
infrastructure;
(c) the infringement of obligations under Union law intended to protect fundamental
rights;
(d) serious harm to property or the environment;
(50) ‘personal data’ means personal data as defined in Article 4, point (1), of Regulation
(EU) 2016/679;
(51) ‘non-personal data’ means data other than personal data as defined in Article 4, point
(1), of Regulation (EU) 2016/679;
(52) ‘profiling’ means profiling as defined in Article 4, point (4), of Regulation (EU)
2016/679 or, in the case of law enforcement authorities, as defined in Article 3, point (4)
of Directive (EU) 2016/680 or, in the case of Union institutions, bodies, offices or
agencies, as defined in Article 3, point (5) of Regulation (EU) 2018/1725;
(53) ‘real-world testing plan’ means a document that describes the objectives, methodology,
geographical, population and temporal scope, monitoring, organisation and conduct of
testing in real-world conditions;
(54) ‘sandbox plan’ means a document agreed between the participating provider and the
competent authority describing the objectives, conditions, timeframe, methodology and
requirements for the activities carried out within the sandbox;
(55) ‘AI regulatory sandbox’ means a controlled framework set up by a competent authority
which offers providers or prospective providers of AI systems the possibility to develop,
train, validate and test, where appropriate in real-world conditions, an innovative AI
system, pursuant to a sandbox plan for a limited time under regulatory supervision;
(56) ‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers
and affected persons, taking into account their respective rights and obligations in the
context of this Regulation, to make an informed deployment of AI systems, as well as to
gain awareness about the opportunities and risks of AI and possible harm it can cause;
(57) ‘testing in real-world conditions’ means the temporary testing of an AI system for its
intended purpose in real-world conditions outside a laboratory or otherwise simulated
environment, with a view to gathering reliable and robust data and to assessing and
verifying the conformity of the AI system with the requirements of this Regulation; such
testing is not considered to be placing the AI system on the market or putting it into service
within the meaning of this Regulation, provided that all the conditions laid down in
Article 57 or 60 are fulfilled;
(58) ‘subject’, for the purpose of real-world testing, means a natural person who participates
in testing in real-world conditions;
(59) ‘informed consent’ means a subject's freely given, specific, unambiguous and voluntary
expression of his or her willingness to participate in a particular testing in real-world
conditions, after having been informed of all aspects of the testing that are relevant to
the subject's decision to participate;
(60) ‘deep fake’ means AI-generated or manipulated image, audio or video content that
resembles existing persons, objects, places or other entities or events and would falsely
appear to a person to be authentic or truthful;
(61) ‘widespread infringement’ means any act or omission contrary to Union law protecting
the interest of individuals, which:
(a) has harmed or is likely to harm the collective interests of individuals residing in at
least two Member States other than the Member State in which:
(i) the act or omission originated or took place;
(ii) the provider concerned, or, where applicable, its authorised representative is
located or established; or
(iii) the deployer is established, when the infringement is committed by the
deployer;
(b) has caused, causes or is likely to cause harm to the collective interests of
individuals and has common features, including the same unlawful practice or the
same interest being infringed, and is occurring concurrently, committed by the
same operator, in at least three Member States;
(62) ‘critical infrastructure’ means critical infrastructure as defined in Article 2, point (4), of
Directive (EU) 2022/2557;
(63) ‘general-purpose AI model’ means an AI model, including where such an AI model is
trained with a large amount of data using self-supervision at scale, that displays
significant generality and is capable of competently performing a wide range of distinct
tasks regardless of the way the model is placed on the market and that can be integrated
into a variety of downstream systems or applications, except AI models that are used for
research, development or prototyping activities before they are released on the market;
(64) ‘high-impact capabilities’ means capabilities that match or exceed the capabilities
recorded in the most advanced general-purpose AI models;
(65) ‘systemic risk’ means a risk that is specific to the high-impact capabilities of general-
purpose AI models, having a significant impact on the Union market due to their reach,
or due to actual or reasonably foreseeable negative effects on public health, safety,
public security, fundamental rights, or the society as a whole, that can be propagated at
scale across the value chain;
(66) ‘general-purpose AI system’ means an AI system which is based on a general-purpose
AI model, that has the capability to serve a variety of purposes, both for direct use as
well as for integration in other AI systems;
(67) ‘floating-point operation’ or ‘FLOP’ means any mathematical operation or assignment
involving floating-point numbers, which are a subset of the real numbers typically
represented on computers by an integer of fixed precision scaled by an integer exponent
of a fixed base;
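Note (illustrative, not part of the Regulation): the description in point (67), ‘an integer of fixed precision scaled by an integer exponent of a fixed base’, is the familiar significand-and-exponent form m · bᵉ. The sketch below shows that decomposition for an IEEE 754 double (base 2, 53-bit significand); the decomposition itself is standard, its framing as an illustration of point (67) is an assumption:

```python
# Decompose an IEEE 754 double into the form described in point (67):
# an integer significand of fixed (53-bit) precision scaled by an
# integer exponent of the fixed base 2.
import math

x = 0.1
m, e = math.frexp(x)          # x == m * 2**e, with 0.5 <= m < 1
significand = int(m * 2**53)  # integer of fixed precision (for normal doubles)
exponent = e - 53             # integer exponent of base 2

assert x == significand * 2.0**exponent
print(f"{x} == {significand} * 2**{exponent}")
```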
(68) ‘downstream provider’ means a provider of an AI system, including a general-purpose
AI system, which integrates an AI model, regardless of whether the model is provided by
themselves and vertically integrated or provided by another entity based on contractual
relations.