
Inside Internal Controls

News and discussion on implementing risk management


OECD principles on artificial intelligence released


On May 22, 2019, the Organization for Economic Cooperation and Development (OECD) approved the OECD Recommendation on Artificial Intelligence. The aim of the OECD Recommendation on AI is to “foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values.” This has led the OECD to create principles for the responsible stewardship of AI, along with policy recommendations for governments based on those principles.

OECD AI principles

The OECD Recommendation on AI contains five principles for responsible AI stewardship. These principles are meant to complement each other and serve as a basis for future policy development. The OECD AI Principles are:

  1. Inclusive growth, sustainable development and well-being: Stakeholders are encouraged to proactively engage in stewardship of AI in order to pursue beneficial outcomes.
  2. Human-centred values and fairness: Organizations and individuals that deploy and develop AI should respect the rule of law, human rights, and democratic values.
  3. Transparency and explainability: AI market participants should commit to “transparency and reasonable disclosure” for AI systems. This involves fostering global awareness and understanding of AI systems and ensuring those affected by an AI system understand the outcome and can challenge the outcome if they disagree.
  4. Robustness, security and safety: Stakeholders should focus on risk management across the AI lifecycle to ensure systems function appropriately and do not pose safety risks. In particular, developers should think about traceability of data, processes, and decisions so that AI outcomes can be analyzed.
  5. Accountability: Throughout the AI lifecycle, actors are accountable for the proper functioning of AI systems.

Actions for policymakers

The OECD Recommendation on AI also contains actions governments can take in furtherance of the principles. These include:

  1. Facilitating public and private investment in research & development to spur innovation in trustworthy AI. This includes interdisciplinary efforts to create open and representative datasets that address concerns about bias, interoperability, and privacy.
  2. Fostering accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge. Governments should invest in the underlying digital infrastructure, including the development of data trusts to support ethical data sharing.
  3. Ensuring a policy environment that will open the way to deployment of trustworthy AI systems. One option for doing so is to create controlled environments for experimentation, commonly called sandboxes.
  4. Empowering people with the skills for AI and supporting workers through a fair transition. This can be done through education and training programmes, social dialogue with workers, and support for those who have been displaced by automation.
  5. Co-operating across borders and sectors to make progress on the responsible stewardship of trustworthy AI. Governments should work together to share knowledge, develop global technical standards and internationally comparable metrics, and above all seek to forge consensus.

Global dialogue regarding AI

The OECD Recommendation on AI is one example of an ongoing global dialogue regarding the responsible development and use of AI. Other recently published AI frameworks include:

  • The International Technology Law Association recently released Responsible AI: A Global Policy Framework. This publication provides an in-depth discussion of principles and ethical guideposts that encourage the responsible development, deployment, and use of AI.
  • The European Commission’s High-Level Expert Group on Artificial Intelligence has released Ethics Guidelines for Trustworthy AI, which aims to offer guidance for AI applications by building a foundation of trust in AI (see our blog post).
  • The Toronto Declaration seeks to protect human rights in a world of machine learning systems.
  • Singapore’s Personal Data Protection Commission released a proposed Model AI Governance Framework in January 2019. The Model AI Governance Framework builds on the previous Discussion Paper on AI and Personal Data, and covers internal governance, decision-making models, operations management and customer relationship management for AI (see our blog post).

Taking inspiration from these frameworks, organizations that develop, deploy and use AI systems should implement appropriate governance controls to mitigate risks and help ensure responsible AI by design.

By Charles S. Morgan


McCarthy Tétrault LLP

McCarthy Tétrault is a Canadian law firm that delivers integrated business law, litigation, tax law, real property law, and labour and employment law services nationally and globally. McCarthy publishes a series of blogs to share information that helps companies manage their businesses and comply with the law. On the Inside Internal Controls blog we share some of those posts, drawing on the firm’s expertise in areas including Competition/Anti-trust, Corporate and Commercial Law, Intellectual Property, Privacy, Environmental Law, Technology and Litigation.