12 August 2025

AMF Publishes Draft Guideline On AI Use By Financial Institutions

Torys LLP


In response to the widespread adoption of AI in financial institutions, Québec's Autorité des marchés financiers (AMF) recently published draft guidelines on the use of AI systems in financial institutions1. The draft guidelines apply to authorized insurers, financial services cooperatives, trust companies, and deposit-taking institutions. The AMF is accepting comments from the public on the draft guidelines until November 7, 2025.

What you need to know

  • The draft guidelines establish the AMF's expectations in relation to the measures financial institutions should take to mitigate risks associated with use of AI systems.
  • The draft guidelines largely align with international standards, including those related to risk assessment, testing and monitoring, governance, transparency, and the ethical treatment of customers.
  • Although the guidelines are still in draft form, they provide important insight into what the AMF considers best practices for using AI. Because the guidelines are closely aligned with international standards, compliance with them will help align financial institutions with international expectations on AI ethics and safety.

Overview

The AMF is the regulatory and oversight body for Québec's financial sector. It ensures that individuals and firms in the financial sector comply with applicable laws and publishes guidelines to ensure that institutions manage risks appropriately and effectively.

The AMF adopts the definition of AI established by the Organisation for Economic Co-operation and Development: "a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment"2. The draft guidelines also provide definitions for a series of common AI-related terms and concepts.

The AMF's draft guidelines express its expectation that each financial institution will:

  • assign risk ratings to each of its AI systems;
  • develop, document, approve and implement appropriate processes and controls to mitigate risk in each stage of an AI system's life cycle;
  • establish governance policies, processes, and procedures and define stakeholder roles and responsibilities;
  • implement risk management policies, processes, and procedures; and
  • ensure fair and ethical treatment of customers.

Assigning risk ratings to AI systems

Financial institutions should create processes to rate the risk of each AI system they use. These ratings determine which policies and procedures apply to a given system throughout the various stages of its lifecycle.

Factors used in assessing risk may include:

  • the characteristics of the system and its data (such as the quality of data used and whether it contains personal information, the degree of explainability of the system and how much insight the institution has into its design parameters);
  • any system controls (including the effectiveness of the bias correction process, the risk of re-identifying personal or confidential information, and the risk of participating in the financial exclusion of a group); and
  • the institution's exposure (that is, how critical an AI system is to the institution, the type and volume of customers that could be impacted by an issue with the system, and the institution's dependence on a third party for use of the system).

Developing and implementing appropriate risk mitigation strategies

Financial institutions should develop processes and controls for each stage of an AI system's lifecycle that are proportionally responsive to the system's risk rating. This means that AI systems with higher risk ratings will need to be monitored and corrected more frequently than those with lower risk ratings.

For high-risk AI systems, institutions should document the technical elements of the system, its intended use, and any controls that have been implemented. This documentation should include records of performance assessment, the availability of comparable systems in the event of an AI system's failure, data used for training and testing, and choices made in relation to transparency, explainability, and quality of data.

Additionally, institutions should take steps to:

  • justify their choice to use a particular AI system before moving into design and/or procurement, taking into consideration the institution's needs, risk tolerance, and available alternatives;
  • ensure that the quality of the data used by the AI system is appropriate and corrected when needed to avoid perpetuating biases;
  • implement processes to ensure that the institution's design and procurement of AI systems favours systems that minimize risk by prioritizing factors like cybersecurity, explainability and robustness;
  • validate AI systems to assess the risk of cybersecurity incidents, bias and discrimination, hallucinations and intellectual property infringements (among others), and conduct an internal audit to assess whether adequate processes, procedures and controls have been implemented to mitigate those risks;
  • limit the use of AI systems with high or provisional risk ratings; and
  • continually monitor the performance and use of AI systems and their outputs.

Establishing governance policies and defining stakeholder roles and responsibilities

The AMF expects financial institutions to establish governance mechanisms that specify the roles and responsibilities of the key individuals responsible for each AI system within an institution. In particular, the AMF envisions each AI system as having a system-specific manager who reports to a member of senior management, and that member of senior management as having a bird's-eye view of, and accountability for, all AI systems within the institution.

The guideline sets out specific responsibilities for the board of directors and senior management:

  • The board should ensure that senior management fosters a culture that promotes the responsible use of AI, and that the board as a whole is sufficiently competent to understand the risks of using AI systems.
  • Senior management should develop a risk management policy for the use of AI systems that clearly demarcates the roles and responsibilities of key stakeholders, maintain adequate knowledge of the institution's AI systems, implement validation exercises where warranted by an AI system's risk classification, and conduct internal audits.

The draft guidelines also set out the basic competencies required for those involved in the management, procurement and design of AI systems.

Implementing risk management policies, processes and procedures

Each institution must have policies, processes and procedures that are responsive to its specific activities and risk appetite, in light of the risk classification of the AI systems it is using. Key considerations include:

  • A centralized AI system directory: institutions must maintain a directory of all the AI systems they use. This directory should include details on the model, how the model was trained, the origin and description of any training data, whether the model is isolated upstream or downstream, risk ratings, and triggers for validation processes, among other criteria.
  • Comprehensive risk management: risk assessments on the use of AI systems across the institution should be provided periodically to ensure that managers, users, validation teams and senior management have an overall view of the institution's risk exposure.

Treating customers fairly

While previous guidelines continue to apply with respect to the treatment of customers, the draft guidelines include specific expectations for AI systems. In particular, institutions are expected to ensure that:

  • codes of ethics uphold high standards for AI use;
  • discriminatory factors are documented and reported to senior management;
  • discrimination and bias found in customer-impacting systems are promptly corrected;
  • special attention is paid to the quality of secondary data sources used when the results impact customers;
  • consent is obtained from customers when their personal data is being used by an AI system;
  • customers are informed when they are interacting with an AI system (like a chatbot) and that a human can intervene upon customer request; and
  • if a decision is made by an AI system or by a human using information gathered from an AI system, explanations are made available to customers.

Conclusion

The draft guidelines are beneficial to financial institutions for two reasons: first, they provide insight into what the Québec regulator expects of financial institutions when using AI systems; and second, they provide an industry standard by which to measure existing and planned AI risk management measures. Because the draft guidelines have significant overlap with international laws and standards on AI, including principles drawn from the European Union's Artificial Intelligence Act and the OECD's Recommendation of the Council on Artificial Intelligence, compliance with the guidelines will help align financial institutions with international expectations on AI ethics and safety.

Footnotes

1. Autorité des marchés financiers, Ligne directrice sur l'utilisation de l'intelligence artificielle, July 2025 (available only in French).

2. See the OECD's Recommendation of the Council on Artificial Intelligence.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
