(1) ‘AI system’ means a machine-based system designed to operate with varying levels of
autonomy and that may exhibit adaptiveness after deployment and that, for explicit
or implicit objectives, infers, from the input it receives, how to generate outputs such
as predictions, content, recommendations, or decisions that can influence physical or
virtual environments;
(1a) ‘risk’ means the combination of the probability of an occurrence of harm and the
severity of that harm;
(2) ‘provider’ means a natural or legal person, public authority, agency or other body
that develops an AI system or a general-purpose AI model or that has an AI system
or a general-purpose AI model developed and places them on the market or puts the
system into service under its own name or trademark, whether for payment or free of
charge;
(4) ‘deployer’ means any natural or legal person, public authority, agency or other body
using an AI system under its authority except where the AI system is used in the
course of a personal non-professional activity;
(5) ‘authorised representative’ means any natural or legal person located or established
in the Union who has received and accepted a written mandate from a provider of an
AI system or a general-purpose AI model to, respectively, perform and carry out on
its behalf the obligations and procedures established by this Regulation;
(6) ‘importer’ means any natural or legal person located or established in the Union that
places on the market an AI system that bears the name or trademark of a natural or
legal person established outside the Union;
(7) ‘distributor’ means any natural or legal person in the supply chain, other than the
provider or the importer, that makes an AI system available on the Union market;
(8) ‘operator’ means the provider, the product manufacturer, the deployer, the authorised
representative, the importer or the distributor;
(9) ‘placing on the market’ means the first making available of an AI system or a
general-purpose AI model on the Union market;
(10) ‘making available on the market’ means any supply of an AI system or a
general-purpose AI model for distribution or use on the Union market in the course
of a commercial activity, whether in return for payment or free of charge;
(11) ‘putting into service’ means the supply of an AI system for first use directly to the
deployer or for own use in the Union for its intended purpose;
(12) ‘intended purpose’ means the use for which an AI system is intended by the provider,
including the specific context and conditions of use, as specified in the information
supplied by the provider in the instructions for use, promotional or sales materials
and statements, as well as in the technical documentation;
(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in
accordance with its intended purpose, but which may result from reasonably
foreseeable human behaviour or interaction with other systems, including other AI
systems;
(14) ‘safety component of a product or system’ means a component of a product or of a
system which fulfils a safety function for that product or system, or the failure or
malfunctioning of which endangers the health and safety of persons or property;
(15) ‘instructions for use’ means the information provided by the provider to inform the
user of, in particular, an AI system’s intended purpose and proper use;
(16) ‘recall of an AI system’ means any measure aimed at achieving the return to the
provider of an AI system made available to deployers, or at taking it out of service or
disabling its use;
(17) ‘withdrawal of an AI system’ means any measure aimed at preventing an AI system
in the supply chain from being made available on the market;
(18) ‘performance of an AI system’ means the ability of an AI system to achieve its
intended purpose;
(19) ‘notifying authority’ means the national authority responsible for setting up and
carrying out the necessary procedures for the assessment, designation and
notification of conformity assessment bodies and for their monitoring;
(20) ‘conformity assessment’ means the process of demonstrating whether the
requirements set out in Title III, Chapter 2 of this Regulation relating to a high-risk
AI system have been fulfilled;
(21) ‘conformity assessment body’ means a body that performs third-party conformity
assessment activities, including testing, certification and inspection;
(22) ‘notified body’ means a conformity assessment body notified in accordance with this
Regulation and other relevant Union harmonisation legislation;
(23) ‘substantial modification’ means a change to the AI system after its placing on the
market or putting into service which is not foreseen or planned in the initial
conformity assessment by the provider and as a result of which the compliance of the
AI system with the requirements set out in Title III, Chapter 2 of this Regulation is
affected, or which results in a modification of the intended purpose for which the AI
system has been assessed;
(24) ‘CE marking of conformity’ (CE marking) means a marking by which a provider
indicates that an AI system is in conformity with the requirements set out in Title III,
Chapter 2 of this Regulation and other applicable Union legislation harmonising the
conditions for the marketing of products (‘Union harmonisation legislation’)
providing for its affixing;
(25) ‘post-market monitoring system’ means all activities carried out by providers of AI
systems to collect and review experience gained from the use of AI systems they
place on the market or put into service for the purpose of identifying any need to
immediately apply any necessary corrective or preventive actions;
(26) ‘market surveillance authority’ means the national authority carrying out the
activities and taking the measures pursuant to Regulation (EU) 2019/1020;
(27) ‘harmonised standard’ means a European standard as defined in Article 2(1)(c) of
Regulation (EU) No 1025/2012;
(28) ‘common specification’ means a set of technical specifications, as defined in point 4
of Article 2 of Regulation (EU) No 1025/2012, providing means to comply with
certain requirements established under this Regulation;
(29) ‘training data’ means data used for training an AI system through fitting its learnable
parameters;
(30) ‘validation data’ means data used for providing an evaluation of the trained AI
system and for tuning its non-learnable parameters and its learning process, among
other things, in order to prevent underfitting or overfitting; the validation dataset
may be a separate dataset or part of the training dataset, either as a fixed or variable
split;
(31) ‘testing data’ means data used for providing an independent evaluation of the AI
system in order to confirm the expected performance of that system before its placing
on the market or putting into service;
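Purely as a practical illustration, and not part of the legal text: points (29) to (31) mirror the conventional three-way data split used in machine-learning development. A minimal sketch, assuming scikit-learn and synthetic data; the 70/15/15 proportions and the random seed are arbitrary assumptions, not requirements of this Regulation:

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))   # synthetic feature matrix (illustrative only)
y = (X[:, 0] > 0).astype(int)    # synthetic labels

# Fixed 70/15/15 split: first carve off 30%, then halve it into
# validation and testing portions.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=0)

# 'training data' (29) fits the learnable parameters; 'validation data' (30)
# tunes non-learnable parameters and the learning process; 'testing data' (31)
# provides the independent final evaluation before placing on the market.
```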
(32) ‘input data’ means data provided to or directly acquired by an AI system on the basis
of which the system produces an output;
(33) ‘biometric data’ means personal data resulting from specific technical processing
relating to the physical, physiological or behavioural characteristics of a natural
person, such as facial images or dactyloscopic data;
(33a) ‘biometric identification’ means the automated recognition of physical,
physiological, behavioural, and psychological human features for the purpose of
establishing an individual’s identity by comparing biometric data of that individual to
stored biometric data of individuals in a database;
(33c) ‘biometric verification’ means the automated verification of the identity of natural
persons by comparing biometric data of an individual to previously provided
biometric data (one-to-one verification, including authentication);
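As a technical illustration, and not part of the legal text: points (33a) and (33c) distinguish one-to-many identification from one-to-one verification. A minimal sketch of that difference in comparison logic, assuming precomputed biometric feature vectors (embeddings) and a cosine-similarity threshold; the embeddings, the threshold value and the reference database are all hypothetical:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two biometric feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray,
           threshold: float = 0.8) -> bool:
    # One-to-one verification, point (33c): compare the probe against a
    # single previously provided template for a claimed identity.
    return cosine(probe, enrolled) >= threshold

def identify(probe: np.ndarray, database: dict[str, np.ndarray],
             threshold: float = 0.8) -> str | None:
    # One-to-many identification, point (33a): compare the probe against
    # every stored template in a reference database, return the best match.
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = cosine(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```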
(33d) ‘special categories of personal data’ means the categories of personal data referred to
in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680
and Article 10(1) of Regulation (EU) 2018/1725;
(33e) ‘sensitive operational data’ means operational data related to activities of prevention,
detection, investigation and prosecution of criminal offences, the disclosure of which
can jeopardise the integrity of criminal proceedings;
(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or
inferring emotions or intentions of natural persons on the basis of their biometric
data;
(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning
natural persons to specific categories on the basis of their biometric data unless
ancillary to another commercial service and strictly necessary for objective technical
reasons;
(36) ‘remote biometric identification system’ means an AI system for the purpose of
identifying natural persons, without their active involvement, typically at a distance
through the comparison of a person’s biometric data with the biometric data
contained in a reference database;
(37) ‘‘real-time’ remote biometric identification system’ means a remote biometric
identification system whereby the capturing of biometric data, the comparison and
the identification all occur without a significant delay, comprising not only
instant identification but also limited short delays in order to avoid circumvention;
(38) ‘‘post’ remote biometric identification system’ means a remote biometric
identification system other than a ‘real-time’ remote biometric identification system;
(39) ‘publicly accessible space’ means any publicly or privately owned physical place
accessible to an undetermined number of natural persons, regardless of whether
certain conditions for access may apply, and regardless of the potential capacity
restrictions;
(40) ‘law enforcement authority’ means:
(a) any public authority competent for the prevention, investigation, detection or
prosecution of criminal offences or the execution of criminal penalties,
including the safeguarding against and the prevention of threats to public
security; or
(b) any other body or entity entrusted by Member State law to exercise public
authority and public powers for the purposes of the prevention, investigation,
detection or prosecution of criminal offences or the execution of criminal
penalties, including the safeguarding against and the prevention of threats to
public security;
(41) ‘law enforcement’ means activities carried out by law enforcement authorities or on
their behalf for the prevention, investigation, detection or prosecution of criminal
offences or the execution of criminal penalties, including the safeguarding against
and the prevention of threats to public security;
(42) ‘Artificial Intelligence Office’ means the Commission’s function of contributing to the
implementation, monitoring and supervision of AI systems and AI governance.
References in this Regulation to the Artificial Intelligence Office shall be understood
as references to the Commission.
(43) ‘national competent authority’ means any of the following: the notifying authority
and the market surveillance authority. As regards AI systems put into service or used
by EU institutions, agencies, offices and bodies, any reference to national competent
authorities or market surveillance authorities in this Regulation shall be understood
as referring to the European Data Protection Supervisor;
(44) ‘serious incident’ means any incident or malfunctioning of an AI system that directly
or indirectly leads to any of the following:
(a) the death of a person or serious damage to a person’s health;
(b) a serious and irreversible disruption of the management and operation of
critical infrastructure.
(ba) a breach of obligations under Union law intended to protect fundamental rights;
(bb) serious damage to property or the environment;
(44a) ‘personal data’ means personal data as defined in Article 4, point (1) of
Regulation (EU) 2016/679;
(44c) ‘non-personal data’ means data other than personal data as defined in point (1)
of Article 4 of Regulation (EU) 2016/679;
(be) ‘profiling’ means any form of automated processing of personal data as defined
in point (4) of Article 4 of Regulation (EU) 2016/679; or in the case of law
enforcement authorities – in point 4 of Article 3 of Directive (EU) 2016/680 or,
in the case of Union institutions, bodies, offices or agencies, in point (5) of
Article 3 of Regulation (EU) 2018/1725;
(bf) ‘real world testing plan’ means a document that describes the objectives,
methodology, geographical, population and temporal scope, monitoring,
organisation and conduct of testing in real world conditions;
(44eb) ‘sandbox plan’ means a document agreed between the participating provider
and the competent authority describing the objectives, conditions, timeframe,
methodology and requirements for the activities carried out within the sandbox.
(bg) ‘AI regulatory sandbox’ means a concrete and controlled framework set up by a
competent authority which offers providers or prospective providers of AI
systems the possibility to develop, train, validate and test, where appropriate in
real world conditions, an innovative AI system, pursuant to a sandbox plan for
a limited time under regulatory supervision.
(bh) ‘AI literacy’ means skills, knowledge and understanding that allow
providers, users and affected persons, taking into account their respective rights
and obligations in the context of this Regulation, to make an informed
deployment of AI systems, as well as to gain awareness about the opportunities
and risks of AI and the possible harm it can cause;
(bi) ‘testing in real world conditions’ means the temporary testing of an AI system
for its intended purpose in real world conditions outside of a laboratory or
otherwise simulated environment with a view to gathering reliable and robust
data and to assessing and verifying the conformity of the AI system with the
requirements of this Regulation; testing in real world conditions shall not be
considered as placing the AI system on the market or putting it into service
within the meaning of this Regulation, provided that all conditions under
Article 53 or Article 54a are fulfilled;
(bj) ‘subject’ for the purpose of real world testing means a natural person who
participates in testing in real world conditions;
(bk) ‘informed consent’ means a subject's freely given, specific, unambiguous and
voluntary expression of his or her willingness to participate in a particular
testing in real world conditions, after having been informed of all aspects of the
testing that are relevant to the subject's decision to participate;
(bl) "deep fake" means AI generated or manipulated image, audio or video content
that resembles existing persons, objects, places or other entities or events and
would falsely appear to a person to be authentic or truthful
(44e) ‘widespread infringement’ means any act or omission contrary to Union law
that protects the interest of individuals:
(a) which has harmed or is likely to harm the collective interests of individuals
residing in at least two Member States other than the Member State in which:
(i) the act or omission originated or took place;
(ii) the provider concerned, or, where applicable, its authorised
representative is established; or
(iii) the deployer is established, when the infringement is committed by the
deployer;
(b) which has caused, causes or is likely to cause harm to the collective interests
of individuals and which has common features, including the same unlawful
practice and the same interest being infringed, and which is occurring
concurrently, committed by the same operator, in at least three Member States;
(44h) ‘critical infrastructure’ means an asset, a facility, equipment, a network or a system,
or a part thereof, which is necessary for the provision of an essential service
within the meaning of Article 2(4) of Directive (EU) 2022/2557;
(44b) ‘general-purpose AI model’ means an AI model, including when trained with a large
amount of data using self-supervision at scale, that displays significant generality and
is capable of competently performing a wide range of distinct tasks regardless of the
way the model is placed on the market and that can be integrated into a variety of
downstream systems or applications. This does not cover AI models that are used
before release on the market for research, development and prototyping activities.
(44c) ‘high-impact capabilities’ in general-purpose AI models means capabilities that
match or exceed the capabilities recorded in the most advanced general-purpose AI
models.
(44d) ‘systemic risk at Union level’ means a risk that is specific to the high-impact
capabilities of general-purpose AI models, having a significant impact on the internal
market due to its reach, and with actual or reasonably foreseeable negative effects on
public health, safety, public security, fundamental rights, or the society as a whole,
that can be propagated at scale across the value chain.
(44e) ‘general-purpose AI system’ means an AI system which is based on a
general-purpose AI model and that has the capability to serve a variety of purposes,
both for direct use as well as for integration in other AI systems;
(44f) ’floating-point operation’ means any mathematical operation or assignment
involving floating-point numbers, which are a subset of the real numbers typically
represented on computers by an integer of fixed precision scaled by an integer
exponent of a fixed base.
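As a technical illustration, and not part of the legal text: the representation described in point (44f) is, in effect, the significand–exponent form used by IEEE 754 arithmetic (base 2 for standard double precision). A minimal sketch, assuming Python's standard library; the helper decompose is hypothetical, and the 53-bit significand width applies only to double-precision floats:

```python
import math

def decompose(x: float) -> tuple[int, int]:
    """Return (significand, exponent) with x == significand * 2**exponent,
    illustrating the 'integer of fixed precision scaled by an integer
    exponent of a fixed base' wording of point (44f)."""
    m, e = math.frexp(x)           # x == m * 2**e, with 0.5 <= |m| < 1
    significand = int(m * 2**53)   # scale the 53-bit double-precision mantissa to an integer
    return significand, e - 53

sig, exp = decompose(6.25)
print(sig, exp, sig * 2.0**exp)    # reconstructs 6.25 as significand * 2**exponent
```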
(44g) ‘downstream provider’ means a provider of an AI system, including a general-
purpose AI system, which integrates an AI model, regardless of whether the model is
provided by themselves and vertically integrated or provided by another entity based
on contractual relations.