CSA STAFF NOTICE AND CONSULTATION 11-348 THE AI REVOLUTION: REDEFINING CANADIAN SECURITIES LAW
PART 2 – REGISTRANT GUIDANCE
(For dealers, advisers and investment fund managers and the 81-series funds they manage)
Brian Koscak, PCMA Vice Chair
This article is Part 2 in a three-part series examining the Canadian Securities Administrators’ (the CSA) publication of CSA Staff Notice and Consultation 11-348 – Applicability of Canadian Securities Laws and the Use of Artificial Intelligence Systems in Capital Markets (the Notice). This article examines the Notice’s specific application to investment fund managers (IFMs), dealers and advisers (collectively, Registrants).
Part 1 discussed the overarching themes identified by the CSA, including technology neutrality in securities regulation, artificial intelligence (AI) governance and oversight, the explainability requirements, disclosure obligations, and management of AI-related conflicts of interest. You can access Part 1 here.
This series of articles seeks to provide certain Market Participants[1] with a comprehensive understanding of how Canadian securities laws regulate “AI systems”[2] in our capital markets and of their related obligations.
REGISTRANTS
(a) General Obligations
The CSA’s treatment of Registrant obligations represents a balance between innovation and regulation. While maintaining the fundamental requirement that investment-related activities require registration, the Notice acknowledges that AI is changing how these activities are performed.
Although National Instrument 31-103 Registration Requirements, Exemptions and Ongoing Registrant Obligations (NI 31-103) remains the cornerstone of Registrant regulation, its application to AI systems requires a thorough and cautious interpretation to ensure its effective implementation. Most notable is the CSA’s position that automation doesn’t change accountability. Whether a human or an AI system is making decisions, Registrants remain bound by the overarching standard of care requiring fairness, honesty, and good faith in dealing with clients.
The Notice clarifies that members of the Canadian Investment Regulatory Organization (CIRO) must comply with both CIRO requirements and those under applicable securities law, creating a comprehensive oversight structure for AI deployment in registered firms.
(b) Applying for Registration or Updating Registration Information – Required Filings
Firms applying for registration must describe their business plans, including operating models that involve AI systems and their usage. On an ongoing basis, Registrants must also update their disclosures if they add AI systems to their business and operations in a way that might affect the registerable services provided to clients.
The Notice states that CSA members will examine the risks that AI systems pose to both the Registrant’s business and its clients, likely requesting detailed information about AI deployment plans. This isn’t a rubber-stamp process; the CSA is prepared to impose tailored terms and conditions on firms using AI systems.
The Notice encourages early engagement with CSA members. Firms considering AI deployment are “strongly encouraged to contact staff at an early stage,” a step that could help prevent problems before they arise. It’s a collaborative approach to regulation that recognizes the complexity of AI integration.
(c) Operational Efficiency Gains (Back-office administration)
The CSA’s approach to operational efficiency is pragmatic. They acknowledge that AI systems can improve back-office operations, from risk management to cybersecurity, client information protection, and regulatory/client reporting. As stated in the Notice,
“AI systems can help advisers and dealers to make know-your-product research, know-your-client (KYC) information gathering, and client onboarding processes more efficient and have the potential to help registrants make better investment recommendations or decisions for their clients.”
The CSA emphasizes that efficiency gains must be balanced with governance structures. The message is clear: automation for automation’s sake isn’t the goal. Any AI-driven efficiency improvements must be implemented within a framework that ensures proper oversight and risk management.
(d) Compliance Structure and System of Controls
As discussed in the Notice, the CSA requires firms to maintain a system of controls and supervision that provides reasonable assurance of compliance with securities law and prudent risk management. Registrants must have tailored policies and procedures (P&Ps) that specifically address the unique challenges posed by AI. Moreover, Registrants must maintain books and records of AI decision-making and provide an “appropriate degree of explainability” to satisfy record keeping requirements under NI 31-103.
This isn’t about tweaking existing P&P frameworks; it’s about developing new approaches that can handle the unique challenges posed by AI. From testing protocols to monitoring systems, Registrants may have to rethink how compliance with securities laws and regulations works in an AI-enabled world.
(e) Outsourcing
The CSA acknowledges that many Registrants may want to use third-party service providers (including affiliates) that use AI systems. However, the CSA draws a bright line: you can outsource support activities, but you cannot outsource the registerable activity. This distinction is not new, but it will shape how Registrants approach their AI systems.
The Notice reminds Registrants that they must undertake thorough due diligence before contracting for outsourced services (including services that are based on or enhanced by AI systems) and maintain ongoing supervision of service providers. This is a continuous process of oversight and verification.
In addition, the CSA requires Registrants to have employees or advisors with specialized skills in both AI and registrant conduct requirements. It’s not enough to understand either technology or securities registration and regulation; a firm needs people who understand both. This requirement could drive significant changes in how Registrants staff their AI initiatives.
(f) Conflicts of Interest
The CSA’s approach to AI-related conflicts of interest involving Registrants (which includes both firms and registered individuals) builds on the traditional conflicts framework set out in sections 13.4 and 13.4.1 of NI 31-103 and its Companion Policy. Registrants must identify and address material conflicts, but AI systems introduce novel complications, from potentially biased algorithms to complex interactions between automated systems that might not be immediately apparent. Registrants are also reminded of their record keeping obligations under sections 11.5 and 11.6 of NI 31-103.
The Notice provides specific examples of AI-related conflicts that firms need to watch for, including non-objective inputs that could produce biased recommendations or decisions favouring certain clients based on demographic characteristics, or favouring proprietary products without proper consideration of alternatives. Notably, the CSA suggests that Registrants may need to explore and implement alternative testing and monitoring methods designed specifically for AI systems that are not fully explainable.
To address such conflicts, the CSA suggests using statistical analysis, pattern detection algorithms, and even AI systems designed to detect bias in other AI systems (i.e., AI review of AI). It’s a technology-forward approach to securities regulation that recognizes that sometimes the best way to monitor AI is with more AI; properly governed, of course.
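To make the idea concrete, the following is a minimal Python sketch of the kind of statistical analysis the CSA contemplates: it measures how often an AI system recommends proprietary products across client segments and flags material disparities for human compliance review. The field names, segments, and 10% tolerance are illustrative assumptions, not values drawn from the Notice.

```python
# A minimal sketch of a statistical bias check over logged AI outputs.
# Field names, client segments, and the 10% tolerance are illustrative
# assumptions, not values prescribed by the Notice.
from collections import defaultdict

def recommendation_rates(records, group_key="client_segment"):
    """Rate at which a proprietary product was recommended, per client segment."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        hits[r[group_key]] += r["proprietary_recommended"]
    return {g: hits[g] / totals[g] for g in totals}

def flag_disparity(rates, max_gap=0.10):
    """Escalate to human compliance review if segment rates diverge too far."""
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": round(gap, 3), "escalate": gap > max_gap}

# Example: records logged from the recommendation system.
log = [
    {"client_segment": "A", "proprietary_recommended": 1},
    {"client_segment": "A", "proprietary_recommended": 0},
    {"client_segment": "B", "proprietary_recommended": 1},
    {"client_segment": "B", "proprietary_recommended": 1},
]
print(flag_disparity(recommendation_rates(log)))
# {'rates': {'A': 0.5, 'B': 1.0}, 'gap': 0.5, 'escalate': True}
```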
ADVISERS AND DEALERS
The Canadian securities registration regime requires dual registration for both firms and individual representatives providing registerable services (i.e., advising and trading in securities). This framework is designed to protect investors who rely on a Registrant’s advice and decisions. Registered representatives must maintain high proficiency standards and remain accountable for their investment recommendations, regardless of whether AI systems play a supporting or primary role in service delivery.
Advisers and dealers exploring AI applications in their operations must implement testing protocols that align with each system’s role, including contingency planning for operations if potential material deficiencies are detected during testing. The testing must occur before and after implementation, ensuring continuous monitoring of AI system performance and reliability.
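As a hedged illustration of what continuous post-implementation monitoring could look like in practice, the sketch below compares live model outputs against a baseline captured during pre-implementation testing and invokes the firm’s contingency plan on material drift. The population stability index (PSI) metric and the 0.2 threshold are common industry heuristics, not CSA-mandated values.

```python
# Illustrative continuous monitoring: compare live model scores against a
# pre-implementation baseline; material drift triggers the contingency plan.
# PSI and the 0.2 threshold are industry heuristics, not CSA requirements.
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between two samples of scores in [0, 1]."""
    def bucket(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int(x * bins), bins - 1)] += 1
        return [max(c / len(sample), 1e-6) for c in counts]  # avoid log(0)
    base, cur = bucket(baseline), bucket(live)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, cur))

def monitor(baseline_scores, live_scores, threshold=0.2):
    value = psi(baseline_scores, live_scores)
    if value > threshold:
        return f"PSI={value:.3f}: material drift detected, invoke contingency plan"
    return f"PSI={value:.3f}: within tolerance, continue monitoring"
```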
Clear client disclosure about AI system usage and associated risks also remains paramount. As stated in the Notice,
“[i]t will be important to disclose to clients in a clear and meaningful manner any use of AI systems that may directly affect the registerable services provided to them and any associated risks, consistent with the relationship disclosure information requirements in section 14.2 of NI 31-103 and the duty to deal fairly, honestly and in good faith with clients.”
The Notice sets out a number of use cases where AI systems may be used by Registrants, which are discussed below.
(a) Trade Execution
While the CSA acknowledges that AI can make trade execution more efficient, such as through execution quality improvement tools, the Notice emphasizes that responsibility for registerable activity remains unchanged. Whether a human or an AI is executing trades, the same fundamental obligations apply.
The CSA recognizes that AI systems can enhance direct market access (DMA) capabilities but insists on maintaining human oversight. The requirement for “appropriate explainability” is important. The CSA states that there must be enough transparency for human monitors to detect and correct errors in a timely manner (i.e., keeping “humans in the loop”).
The Notice also addresses the relationship between AI-driven execution and market manipulation risks. Firms must ensure their AI systems don’t engage in manipulative trading practices, even unintentionally. This expectation could significantly influence how trading algorithms are designed and monitored.
(b) KYC and Onboarding
While embracing AI’s potential to enhance KYC information gathering, the CSA maintains the requirement for “meaningful interaction” between clients and Registrants. This doesn’t necessarily mean face-to-face meetings. The Notice acknowledges that meaningful interactions can occur through various mediums, including technology-mediated communications (e.g., emails and Zoom calls).
The treatment of automated processes is particularly nuanced. The CSA allows for automated KYC processes under certain circumstances, such as online advisers operating under the “call-as-needed” model. However, this comes with important caveats: there must be sufficient explainability for human oversight to monitor and correct errors in a timely manner, and the technology must facilitate rather than replace meaningful client interaction.
Privacy and confidentiality requirements get special attention in this context. The Notice emphasizes that while AI can enhance KYC processes, Registrants must ensure robust protection of client information. It’s a reminder that technological innovation can’t come at the expense of client privacy.
(c) Client Support
The CSA’s view on AI-powered client support represents a practical embrace of innovation. They acknowledge the role of AI chatbots and other automated support systems in facilitating general client support functions, including complaint handling. However, this comes with a crucial caveat: these systems must deliver accurate information consistently.
The Notice draws important distinctions between different types of client support functions. While AI can handle routine inquiries and information delivery, more complex interactions, particularly those involving investment advice or complaint resolution, require appropriate human oversight. It’s a tiered approach that matches the level of automation to the complexity and risk of the task.
The CSA emphasizes the need for ongoing monitoring and verification of AI-powered support systems. Registered firms must regularly test these systems to ensure they’re providing accurate information and appropriately escalating complex issues to human staff. This is a requirement that could shape how Registrants approach customer service automation.
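One way to picture such periodic verification is the sketch below: a curated “golden set” of client questions is replayed through the support system, checking both answer accuracy and escalation behaviour. The test cases and the ask_support_bot(question) -> (answer, escalated) interface are hypothetical stand-ins, not anything specified in the Notice.

```python
# Hedged sketch of periodic verification for an AI support channel: replay a
# curated "golden set" of questions and check accuracy plus escalation.
# GOLDEN_SET entries and ask_support_bot are hypothetical stand-ins.

GOLDEN_SET = [
    {"question": "What are your trading hours?",
     "must_contain": "9:30", "must_escalate": False},
    {"question": "I want to file a complaint about my adviser.",
     "must_contain": None, "must_escalate": True},
]

def run_verification(ask_support_bot):
    """Return a list of failures for the firm's compliance review."""
    failures = []
    for case in GOLDEN_SET:
        answer, escalated = ask_support_bot(case["question"])
        if case["must_escalate"] and not escalated:
            failures.append((case["question"], "should have escalated to a human"))
        if case["must_contain"] and case["must_contain"] not in answer:
            failures.append((case["question"], "inaccurate or incomplete answer"))
    return failures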
(d) Decision-Making Support
The CSA’s treatment of AI in investment decision-making is nuanced. They recognize AI’s potential to enhance decision-making by efficiently gathering and analyzing information, including drawing on a wider range of sources than humanly possible. However, they’re clear that AI should support, not replace, human judgment in investment decisions (again, the “human in the loop” concept).
The Notice provides specific examples of acceptable AI support functions, from forecasting market movements to monitoring prescribed inputs for changes in relevant criteria. What’s particularly noteworthy is the emphasis on verification: The CSA stresses that Registrants must take reasonable steps to verify the quality and accuracy of AI-generated information before acting on it.
The CSA reminds Registrants that trades must ultimately be recommended or directed by a Registrant, not the AI system. As the CSA states, it requires that Registrants “treat [AI] information as no more than an input for their own decision-making, so that trades are ultimately recommended or directed by the [R]egistrant.” This maintains clear lines of responsibility while allowing Registrants to leverage AI’s analytical capabilities.
(e) Limited Automated Decisions
The CSA permits limited AI-automated decision-making while imposing strict oversight requirements. They acknowledge that AI systems can execute certain decisions automatically, like portfolio rebalancing or dynamic hedging, provided these decisions fall within narrowly prescribed constraints and have appropriate human oversight.
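As a rough sketch of how “narrowly prescribed constraints” might be operationalized, the example below permits automated rebalancing only within a hard drift band and routes anything outside it to a registered individual for review; the target weights and limit are illustrative assumptions, not CSA-prescribed values.

```python
# Sketch of guardrails around automated rebalancing: the AI system may
# self-execute only inside a hard drift band; anything outside the band is
# queued for human review. Weights and limits are illustrative assumptions.

TARGET_WEIGHTS = {"equity": 0.60, "fixed_income": 0.40}
AUTO_LIMIT = 0.05  # drift band within which automated execution is permitted

def propose_rebalance(current_weights):
    auto_trades, for_human_review = [], []
    for asset, target in TARGET_WEIGHTS.items():
        drift = current_weights.get(asset, 0.0) - target
        if drift == 0:
            continue
        if abs(drift) <= AUTO_LIMIT:
            auto_trades.append((asset, -drift))        # inside the guardrail
        else:
            for_human_review.append((asset, -drift))   # human in the loop

    return auto_trades, for_human_review

# Example: a 6% equity drift exceeds the band, so nothing trades automatically.
print(propose_rebalance({"equity": 0.66, "fixed_income": 0.34}))
```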
The treatment of different types of automated decisions is particularly sophisticated. The Notice distinguishes between decisions made for a firm’s own account versus those made for client accounts, with stricter requirements for the latter. When client accounts are involved, the CSA demands a high degree of explainability to ensure Registrants maintain sufficient understanding and control.
The Notice requires Registrants to consult with CSA members well in advance of launching any automated decision-making systems. This isn’t just about notification; rather, the CSA expects Registrants to demonstrate that all concerns (those outlined in the Notice and any others that become apparent during the consultation) have been fully addressed. It’s a collaborative approach to securities regulation that could help prevent problems before they arise.
PORTFOLIO MANAGEMENT
The CSA takes its firmest stance when it comes to fully autonomous AI portfolio management. At the current stage of AI development, they view it as effectively impossible for a Registrant using such a system to consistently satisfy regulatory requirements or reliably deliver desired outcomes for clients. It’s a strong position that could significantly influence how AI is deployed in wealth management.
The Notice acknowledges that while AI can support portfolio management decisions, it cannot replace the judgment of advising representatives. This maintains the highest standards of Registrant proficiency and conduct for discretionary investment management services. It’s a recognition that some aspects of financial services still require human judgment and accountability.
What makes this Notice particularly powerful is its implicit challenge to the industry: prove that AI can meet these standards before attempting fully autonomous portfolio management. Rather than simply saying “no,” the CSA has established clear criteria that future AI systems would need to meet.
INVESTMENT FUND MANAGERS AND THE 81-SERIES NATIONAL INSTRUMENTS GOVERNING REPORTING ISSUER INVESTMENT FUNDS
An IFM directs the business, operations or affairs of an investment fund. IFMs are often registered in another registration category, such as portfolio manager and/or exempt market dealer. Therefore, IFMs often wear multiple hats. This means IFMs must navigate multiple layers of regulatory requirements when implementing AI systems, a complexity the CSA addresses head-on.
The Notice acknowledges that IFMs may use AI systems to help fulfill their fiduciary duties through portfolio monitoring, risk management, and compliance functions, but states that they must maintain rigorous oversight to ensure these systems operate transparently, are explainable, and are free from bias and conflicts of interest. While IFMs can develop or outsource AI systems, their fundamental fiduciary responsibility to the funds they manage (i.e., monitoring investments in a fund’s portfolio, risk management, compliance, etc.) remains unchanged: they must understand and actively supervise any AI tools used in fund management to meet investment objectives and maintain compliance with regulatory obligations.
The Notice states that securities law, including the 81-series National Instruments involving investment funds, prescribes operational and disclosure requirements for investment funds that are reporting issuers. IFMs are also subject to a standard of care. The use of AI systems by IFMs and investment funds is subject to these applicable provisions. Accordingly, the Notice provides guidance involving reporting issuer investment funds, as discussed below.
(a) Disclosure Obligations (Avoiding ‘AI Washing’)
The CSA’s approach to AI disclosure for IFMs is detailed. If an IFM is using AI to help meet a fund’s investment objectives and strategies, this usage must be clearly disclosed in an investment fund’s offering documents (i.e., prospectus and summary documents such as the ETF/Fund Facts, as applicable). But mere disclosure isn’t enough; the CSA expects IFMs to define what they mean by “AI” and explain exactly how these systems are integrated into the fund’s portfolio management process.
What’s of particular importance is the CSA’s commentary on AI marketing. If a fund’s use of AI is marketed as a material investment strategy, then it must be disclosed as an investment objective, a requirement that triggers specific regulatory obligations under National Instrument 81-102 Investment Funds. This isn’t just about transparency; it’s about ensuring investment funds can’t use AI as a marketing gimmick without backing it up with substance.
The Notice takes direct aim at “AI washing,” demanding that investment funds avoid vague, unsubstantiated statements designed to attract investors. IFMs must provide clear, accurate information about how AI systems are being used and integrated into fund operations. It’s a requirement that could shape how investment funds market their AI capabilities.
(b) Risk Factors
The CSA requires IFMs to include appropriate risk disclosure about their use of AI systems in an investment fund’s offering documents. However, these disclosures must be commensurate with the actual use of AI systems and provided in context with the investment fund’s investment objectives and strategies. It’s not enough to include boilerplate AI risk warnings. From model drift to data quality issues, the CSA expects investment funds to identify and explain AI-specific risks that might affect fund performance or operations.
The CSA emphasizes that risk disclosures must be meaningful and relevant to investors. The CSA is effectively saying that IFMs need to explain AI risks in terms that investors can understand and use to make informed decisions. It’s a requirement that could improve the quality of risk disclosure relating to AI in investment fund documents.
(c) Fundamental Changes
The CSA’s treatment of AI-related fundamental changes is of particular interest because it recognizes that AI integration may not be just a technical update; it can fundamentally alter how a fund operates. If an existing fund’s deployment of AI systems would be considered a material investment strategy, it must be disclosed as an investment objective and requires securityholder approval before implementation, as required under applicable securities law.
The Notice reminds IFMs that when AI integration constitutes a material change, it requires public notification through the material change reporting regime. This isn’t just about paperwork; it’s about ensuring that significant changes in how a fund operates are properly communicated to investors and the market.
Most significantly, the CSA requires IFMs to carefully consider whether adding AI systems constitutes a fundamental change requiring securityholder approval. This requirement could significantly influence how IFMs approach AI adoption, potentially leading to more gradual, well-communicated implementations rather than sudden technological shifts.
(d) Sales Communications
The Notice states that an IFM cannot make any untrue or misleading statements about AI usage in any form of marketing material, from website content to social media posts. For example, the CSA requires a balanced presentation. If a fund’s sales communications tout the benefits of using AI systems, they must give equal prominence to associated risks and limitations. The CSA is effectively saying you can’t just highlight the upside of AI; you need to tell the whole story (i.e., the risks or downside).
The CSA also requires IFMs to have P&Ps for reviewing AI-related marketing materials. These reviews must ensure that claims about AI usage are true, not exaggerated, and consistent with regulatory documents. This requirement could impact how funds market their AI capabilities.
(e) Use of AI Indices by Investment Funds
The CSA acknowledges that index providers may use AI systems to generate index composition; however, they state that IFMs must establish clear criteria for when such indices can be used by passive investment funds. The key requirements focus on transparency and the absence of discretion.
The CSA states that if an AI-generated index can’t meet the requirements for transparency and rule-based composition, funds tracking it cannot market themselves as index funds. Instead, they must be treated as actively managed funds, a distinction with implications for marketing and regulatory requirements.
The emphasis on transparency in index methodology is noteworthy. Funds must be able to explain not just what the index is doing, but how it’s doing it. This could influence how AI is used in index construction and potentially lead to the development of more explainable AI systems in this space.
(f) Conflicts of Interest
The CSA’s approach to AI-related conflicts of interest for IFMs builds on the existing framework set out in National Instrument 81-107 – Independent Review Committee for Investment Funds, while acknowledging new challenges. The requirement for Independent Review Committee (IRC) oversight of AI-related conflict matters adds an important layer of governance to how these systems are deployed and operated.
The Notice requires IFMs to identify and refer any actual or perceived conflicts of interest related to AI usage to their fund’s IRC for approval or recommendation prior to using such systems. This isn’t just about traditional conflicts; it’s about understanding how AI systems might create new types of conflicts that weren’t possible in a pre-AI world.
The CSA emphasizes that IFMs must have processes for identifying and managing AI-related conflicts on an ongoing basis. This should include regular monitoring and testing of AI systems to ensure they’re not creating or exacerbating conflicts of interest. It’s a requirement that could shape how funds approach AI governance and oversight.
Conclusion
AI is transforming the Canadian securities landscape, and the CSA’s approach in the Notice reflects a concerted effort to balance innovation with investor protection. Part 2 of this three-part series highlights how the fundamental obligations imposed on Registrants, including IFMs, dealers, and advisers, endure even in the face of AI’s evolving capabilities. Whether it is enhancing back-office efficiencies, improving trade execution, delivering more robust KYC processes, or augmenting client support, the CSA’s message remains consistent: AI must serve as a tool to augment human judgment, not replace it. Accountability, oversight, and continuous monitoring are essential elements of the CSA’s expectations.
The Notice underscores the expectation that Registrants have P&Ps in place to ensure that AI-driven activities align with existing securities regulation requirements. Disclosure must be meaningful and transparent; conflicts of interest must be identified and avoided or mitigated; and AI-related risks must be clearly communicated to clients and investors. For IFMs, the CSA’s cautionary stance on “AI washing” (promoting AI as a fund strategy without substantive backing) illustrates the priority placed on marketing that is not misleading and on balanced risk disclosure.
As the Canadian securities industry continues to integrate AI solutions, the CSA’s proposed approach not only supports responsible AI innovation but also aims to maintain the high standards of investor protection that underpin securities regulation. By fostering early engagement with regulators, emphasizing robust compliance structures, and setting out clear expectations for AI-related activities, the CSA is facilitating a future in which technology can thrive while core regulatory principles remain intact. Part 3 of this series will further explore these themes, offering additional insights into the CSA’s evolving stance on AI’s role in Canadian capital markets.
Next Steps
Part 3 of this series discusses how the Notice applies to public companies that are not investment funds (i.e., non-investment fund reporting issuers, such as public REITs, mining, biotech or infrastructure reporting issuers). It discusses specific considerations reporting issuers must weigh when implementing AI systems, including disclosure obligations and the management of material information related to their AI initiatives.
Part 1 of this series discusses the overarching themes identified by the CSA in its consideration of AI systems and Canadian securities law.
The CSA’s consultation period ends on March 30, 2025. The consultation provides an opportunity for industry stakeholders to shape the evolution of AI regulation in the Canadian capital markets. Through this consultation process, the CSA seeks to foster responsible AI innovation while maintaining market integrity and investor protection. The consultation questions in the Notice are set out below.
CSA Notice Consultation Questions
- Are there use cases for AI systems that you believe cannot be accommodated without new or amended rules, or targeted exemptions from current rules? Please be specific as to the changes you consider necessary.
- Should there be new or amended rules and/or guidance to address risks associated with the use of AI systems in capital markets, including related to risk management approaches to the AI system lifecycle? Should firms develop new governance frameworks or can existing ones be adapted? Should we consider adopting specific governance measures or standards (e.g. OSFI’s E-23 Guideline on Model Risk Management, ISO, NIST)?[3]
- Data plays a critical role in the functioning of AI systems and is the basis on which their outputs are created. What considerations should Market Participants keep in mind when determining what data sources to use for the AI systems they deploy (e.g. privacy, accuracy, completeness)? What measures should Market Participants take when using AI systems to account for the unique risks tied to data sources used by AI systems (e.g. measures that would enhance privacy, accuracy, security, quality, and completeness of data)?
- What role should humans play in the oversight of AI systems (e.g. “human-in-the-loop”) and how should this role be built into a firm’s AI governance framework? Are there certain uses of AI systems in capital markets where direct human involvement in the oversight of AI systems is more important than others (e.g. use cases relying on machine learning techniques that may have lesser degrees of explainability)? Depending on the AI system, what necessary skills, knowledge, training, and expertise should be required? Please provide details and examples.
- Is it possible to effectively monitor AI systems on a continuous basis to identify variations in model output using test-driven development, including stress tests, post-trade reviews, spot checks, and corrective action in the same ways as rules-based trading algorithms in order to mitigate against risks such as model drifts and hallucinations? If so, please provide examples. Do you have suggestions for how such processes derived from the oversight of algorithmic trading systems could be adapted to AI systems for trading recommendations and decisions?
- Certain aspects of securities law require detailed documentation and tracing of decision-making. This type of recording may be difficult in the context of using models relying on certain types of AI techniques. What level of transparency/explainability should be built into an AI system during the design, planning, and building in order for an AI system’s outputs to be understood and explainable by humans? Should there be new or amended rules and/or guidance regarding the use of an AI system that offer less explainability (e.g. safeguards to independently verify the reliability of outputs)?
- FinTech solutions that rely on AI systems proposing to provide KYC and onboarding, advice, and carry out discretionary investment management challenge existing reliance on proficient individuals to carry out registerable activity. Should regulatory accommodations be made to allow for such solutions and, if so, which ones? What restrictions should be imposed to provide the same regulatory outcomes and safeguards as those provided through current proficiency requirements imposed on registered individuals?
- Given the capacity of AI systems to analyze a vast array of potential investments, should we alter our expectations relating to product shelf offerings and the universe of reasonable alternatives that representatives need to take into account in making recommendations that are suitable for clients and put clients’ interests first? How onerous would such an expanded responsibility be in terms of supervision and explainability of the AI systems used?
- Should Market Participants be subject to any additional rules relating to the use of third-party products or services that rely on AI systems? Once such a third-party product or service is in use by a Market Participant, should the third-party provider be subject to requirements, and if so, based on what factors?
- Does the increased use of AI systems in capital markets exacerbate existing vulnerabilities/systemic risks or create new ones? If so, please outline them. Are Market Participants adopting specific measures to mitigate against systemic risks? Should there be new or amended rules to account for these systemic risks? If so, please provide details.
Examples of systemic risks could include the following:
- AI systems working in a coordinated fashion to bring about a desired outcome, such as creating periods of market volatility in order to maximize profits;
- Widespread use of AI systems relying on the same, or limited numbers of, vendors to function (e.g., cloud or data providers), which could lead to financial stability risks resulting from a significant error or a failure with one large vendor;
- A herding effect where there is broad adoption of a single AI system or where several AI systems make similar investment or trading decisions, intentionally or unintentionally, due, for example, to similar design and data sources. This could lead to magnified market moves, including detrimental ones if a flawed AI system is widely used or is used by a sizable Market Participant;
- Widespread systemic biases in outputs of AI systems that affect the efficient functioning and fairness of capital markets.
[1] For greater clarity, this series of articles does not discuss the Notice in connection with its guidance for marketplaces and marketplace participants, clearing agencies and matching service utilities, trade repositories and derivative data reporting, designated rating organizations, and designated benchmark administrators.
[2] An “AI system” is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
[3] Office of the Superintendent of Financial Institutions (OSFI) Guideline E-23 on Model Risk Management; International Organization for Standardization (ISO): Standards for artificial intelligence (https://www.iso.org/artificial-intelligence); National Institute of Standards and Technology (NIST) AI Risk Management Framework (https://www.nist.gov/itl/ai-risk-management-framework).