HAVING REGARD to Article 5 b) of the Convention on the Organisation for Economic Co-operation and
Development of 14 December 1960;
HAVING REGARD to the OECD Guidelines for Multinational Enterprises [OECD/LEGAL/0144]; the
Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder
Flows of Personal Data [OECD/LEGAL/0188]; the Recommendation of the Council concerning Guidelines for
Cryptography Policy [OECD/LEGAL/0289]; the Recommendation of the Council for Enhanced Access and More
Effective Use of Public Sector Information [OECD/LEGAL/0362]; the Recommendation of the Council on Digital
Security Risk Management for Economic and Social Prosperity [OECD/LEGAL/0415]; the Recommendation of
the Council on Consumer Protection in E-commerce [OECD/LEGAL/0422]; the Declaration on the Digital
Economy: Innovation, Growth and Social Prosperity (Cancún Declaration) [OECD/LEGAL/0426]; the Declaration
on Strengthening SMEs and Entrepreneurship for Productivity and Inclusive Growth [OECD/LEGAL/0439];
as well as the 2016 Ministerial Statement on Building more Resilient and Inclusive Labour Markets, adopted
at the OECD Labour and Employment Ministerial Meeting;
HAVING REGARD to the Sustainable Development Goals set out in the 2030 Agenda for Sustainable
Development adopted by the United Nations General Assembly (A/RES/70/1) as well as the 1948 Universal
Declaration of Human Rights;
HAVING REGARD to the important work being carried out on artificial intelligence (hereafter, “AI”) in other
international governmental and non-governmental fora;
RECOGNISING that AI has pervasive, far-reaching and global implications that are transforming societies,
economic sectors and the world of work, and are likely to increasingly do so in the future;
RECOGNISING that AI has the potential to improve the welfare and well-being of people, to contribute to
positive sustainable global economic activity, to increase innovation and productivity, and to help respond to
key global challenges;
RECOGNISING that, at the same time, these transformations may have disparate effects within, and
between societies and economies, notably regarding economic shifts, competition, transitions in the labour
market, inequalities, and implications for democracy and human rights, privacy and data protection, and
digital security;
RECOGNISING that trust is a key enabler of digital transformation; that, although the nature of future AI
applications and their implications may be hard to foresee, the trustworthiness of AI systems is a key factor
for the diffusion and adoption of AI; and that a well-informed whole-of-society public debate is necessary for
capturing the beneficial potential of the technology, while limiting the risks associated with it;
UNDERLINING that certain existing national and international legal, regulatory and policy frameworks
already have relevance to AI, including those related to human rights, consumer and personal data
protection, intellectual property rights, responsible business conduct, and competition, while noting that the
appropriateness of some frameworks may need to be assessed and new approaches developed;
RECOGNISING that given the rapid development and implementation of AI, there is a need for a stable
policy environment that promotes a human-centric approach to trustworthy AI, that fosters research,
preserves economic incentives to innovate, and that applies to all stakeholders according to their role and
the context;
CONSIDERING that embracing the opportunities offered, and addressing the challenges raised, by AI
applications, and empowering stakeholders to engage are essential to fostering adoption of trustworthy AI in
society, and to turning AI trustworthiness into a competitive parameter in the global marketplace;