Artificial Intelligence in Canadian Capital Markets: Examining the Ontario Securities Commission Report


Just as Tony Stark relies on J.A.R.V.I.S. (Just A Rather Very Intelligent System, for the uninitiated) to navigate the complexities of his technological empire in the Marvel Cinematic Universe, participants in Canadian capital markets are actively developing, testing and implementing artificial intelligence (“AI”) systems to support front-, middle- and back-office functions. The Ontario Securities Commission (the “OSC”) speaks to these functions – admittedly, without drawing any parallels to Iron Man – in its report published on October 10, 2023, titled “Artificial Intelligence in Capital Markets – Exploring Use Cases in Ontario” (the “Report”), regarding the adoption of AI in Ontario’s capital markets.

The Report highlights current AI use cases, benefits and challenges, and seeks to raise awareness of the opportunities and risks of using AI in various capacities within Canadian capital markets. More generally, the Report outlines how the OSC will implement oversight, regulation or guidance to further its statutory mandate: to protect investors from unfair, improper or fraudulent practices; to foster fair, efficient and competitive capital markets and confidence in those markets; to foster capital formation; and to contribute to the stability of the financial system and the reduction of systemic risk.

General Background

The Report states that the adoption of AI in Ontario’s capital markets is at an intermediate stage, with varying levels of maturity across applications and functions. AI is broadly used for efficiency improvement, revenue generation and risk management. The most widespread deployments are pre- and post-trade process automation, liquidity forecasting, customer service and support, sales and marketing of financial products and services, generating trading insights, and trade surveillance and detection of market manipulation. Capital markets participants are using AI to support the development of their current products and services, including asset allocation, asset price forecasting, high-frequency trading, and data quality improvements (though, according to the Report, areas such as asset allocation and risk management are at a transitional stage of adoption). The financial services industry is also exploring the use of AI for hedging and to analyze and categorize market conditions in futures markets.

The chief value driver among Canadian capital markets participants is the leveraging of natural language processing (“NLP”) to analyze very large datasets, both structured and unstructured. NLP is a branch of AI that enables AI systems to understand, interpret and generate human language as it is spoken and written. Other value drivers are precision-enhanced predictive analysis, more robust liquidity forecasting and hedging, and increased end-user satisfaction through more personalized service.
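As a purely illustrative sketch of the kind of task NLP automates over unstructured market text, the following scores the sentiment of news headlines against a tiny bag-of-words lexicon. The headlines, lexicon and company name are invented for this example (production systems use far richer language models), but the sketch shows how free-form text becomes a structured signal:

```python
# Minimal bag-of-words sentiment scoring over unstructured headlines.
# Lexicon and headlines are invented for illustration only.
positive = {"beats", "surges", "upgrade", "record"}
negative = {"misses", "plunges", "downgrade", "default"}

def sentiment(headline: str) -> int:
    """Count positive words minus negative words in a headline."""
    words = headline.lower().split()
    return sum(w in positive for w in words) - sum(w in negative for w in words)

headlines = [
    "Acme Corp beats earnings estimates, stock surges",
    "Rating agency issues downgrade after Acme misses guidance",
]
print([sentiment(h) for h in headlines])  # → [2, -2]
```

A real pipeline would also handle punctuation, negation and context, but even this toy version turns unstructured language into a number a trading or surveillance model can consume.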


The Report notes crucial challenges in adopting AI, namely the explainability of AI systems (i.e., the ability of individual AI users to understand and be confident in the AI output) and a set of data-related concerns: massive volumes, variety, sources, privacy, aggregation, quality and consistency. Approaches to explainability are generally two-pronged: (i) post-hoc explainability methods, which involve running explainability algorithms on a developed model to gather insights into its internal workings; and (ii) explainability by design. There appears to be a broad global consensus that the explainability and traceability of the datasets and processes that yield an AI system’s decision should be codified in law. Canada’s upcoming Artificial Intelligence and Data Act in Bill C-27 (“AIDA”) – the country’s first proposed law to regulate AI – includes transparency requirements as well, explained later in this article.
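To make the first prong concrete, here is a minimal, hypothetical sketch of one well-known post-hoc explainability method, permutation feature importance, run against a simple fitted model standing in for an opaque AI system. The features, weights and data are all invented for the sketch and are not drawn from the Report:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented example: three features driving a synthetic "asset return".
# The true weights (2.0, 0.5, 0.0) are assumptions for illustration.
X = rng.normal(size=(500, 3))  # columns: momentum, volume, volatility
y = X @ np.array([2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=500)

# Fit a least-squares model standing in for an already-developed AI system.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X_in, y_in):
    return float(np.mean((X_in @ w - y_in) ** 2))

baseline = mse(X, y)

# Post-hoc step: shuffle one feature at a time and measure how much the
# model's error grows. A large increase means the model relies on that
# feature, giving insight into its internal workings after the fact.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(mse(Xp, y) - baseline)

print(importance)  # momentum should dominate; volatility should be near zero
```

The appeal of post-hoc methods like this one is that they treat the model as a black box, so they can be applied to an existing system; explainability by design, the second prong, instead builds interpretable structure into the model from the outset.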

Interestingly, on data protection and privacy concerns, the Report mentions the European General Data Protection Regulation (“GDPR”) but not current or upcoming Canadian privacy or AI laws. The Report advocates against data privacy regulations that “too stringently restrict the use of public datasets” since market leaders may have trained their models on these datasets prior to regulations becoming effective. Public data drawn from stock exchanges and repositories on the internet is the chief source of the training corpus for AI models for functions like pre- and post-trade process automation, liquidity forecasting, asset allocation, high-frequency trading, etc.

Our understanding is that the GDPR mandates that data subjects be notified if their personal data is sourced from publicly available sources. If the processing relates to personal data manifestly made public by the data subject, no explicit consent is required, but a lawful basis still needs to be established. Canada’s federal private-sector privacy law, the Personal Information Protection and Electronic Documents Act (“PIPEDA”), and the upcoming Consumer Privacy Protection Act (“CPPA”) in Bill C-27, which will replace PIPEDA, allow an organization to collect, use or disclose personal information without the knowledge or consent of the individual if such information is publicly available and specified by regulations.

The Report also offers synthetic data as a solution for imbalance or scarcity of data, or if data is expensive to obtain in large volumes or subject to privacy or confidentiality concerns. Synthetic data refers to data artificially created by computer algorithms that mimic real data.
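A minimal sketch of one common synthetic-data technique: estimate the statistical properties (mean and covariance) of a real sample, then draw new observations from the fitted distribution. Here the “real” two-asset daily returns are themselves simulated, and all numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for confidential "real" daily returns of two correlated assets
# (means and covariance are invented for this sketch).
real = rng.multivariate_normal(
    [0.0005, 0.0003],
    [[1e-4, 6e-5], [6e-5, 9e-5]],
    size=1000,
)

# Fit the mean and covariance of the real sample, then sample from the
# fitted distribution to produce artificial data that mimics it.
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)
synthetic = rng.multivariate_normal(mu, cov, size=1000)

# The synthetic sample preserves the cross-asset correlation without
# reusing any actual (potentially sensitive) observation.
print(np.corrcoef(synthetic, rowvar=False)[0, 1])
```

This Gaussian approach is the simplest case; richer generators (copulas, generative neural networks) are used when real data has heavier tails or more complex structure, which financial returns typically do.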

Interoperability of Data

A key value driver in Canadian capital markets that the Report does not address, as it relates to AI, is the interoperability of data. Data interoperability is the capacity of two or more systems to communicate and exchange information, with each system able to interpret the meaning of the information it receives and seamlessly incorporate it with the data it already stores.

Data interoperability can enable the seamless exchange and usage of financial information among investors, traders, financial institutions, regulators and other market participants. With access to standardized and interoperable data, market participants may be able to make more informed and timely investment decisions. Investors may also be able to enhance portfolio diversification and improve risk management with the ability to analyze data from various sources and asset classes. High-frequency trading and quantitative analysis rely on enormous amounts of data. Interoperable data feeds can enhance these strategies by providing real-time information for decision-making and execution.

Achieving data interoperability, however, will require standardized data formats, protocols and industry-wide data-sharing agreements. Blockchain and other distributed ledger technologies also have the potential to revolutionize data interoperability by providing secure, transparent and tamper-resistant data sharing platforms.
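A minimal sketch of what a standardized data format buys in practice: two hypothetical systems exchange a trade record as JSON with agreed field names, so the receiving system can interpret and use the data directly. The schema and values below are invented for illustration:

```python
import json

# Hypothetical shared schema: both systems agree on these field names/types.
record = {"symbol": "XYZ", "price": 101.25, "quantity": 300, "venue": "TSX"}

wire = json.dumps(record)       # system A serializes to the agreed format
received = json.loads(wire)     # system B parses without custom translation

# Because the schema is shared, system B can immediately combine the data
# with its own records, e.g. computing the trade's notional value.
print(received["price"] * received["quantity"])  # → 30375.0
```

Industry-wide interoperability is harder than this toy exchange suggests; the difficulty lies in getting every participant to agree on, and maintain, the shared schema and the data-sharing terms around it.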

Impact of Bill C-27

Under AIDA, a person who makes or manages a high-impact AI system (“high-impact” to be defined in AIDA’s regulations) must explain the system in plain language on a publicly available website (time and manner may be prescribed by regulation), including how it is used, types of content it generates, decisions and recommendations it provides, and mitigation measures. AI models, such as reinforcement learning and neural networks, show encouraging outcomes in the domain of asset allocation, but the decision-making processes in both models remain hard to explain. This could prove challenging for investors and fund managers who often require a clear understanding of the rationale behind investment decisions.

AIDA obligates those responsible for high-impact systems to establish measures to identify, assess and mitigate the risks of harm or biased output resulting from using those systems. Biased predictions can create feedback loops. For example, if an AI model predicts certain asset values based on biased data, market participants reacting to those predictions may inadvertently reinforce and perpetuate the bias.
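The feedback-loop mechanism can be shown with a toy simulation, where every number is invented for illustration: a model systematically overestimates an asset's value, market participants move the price part-way toward each prediction, and the next prediction is anchored on the now-distorted price:

```python
# Toy simulation of a biased-prediction feedback loop (all values invented).
fair_value = 100.0
price = fair_value
bias = 2.0       # systematic overestimate baked into the model's training data
reaction = 0.5   # fraction of the gap traders close each period

history = [price]
for _ in range(20):
    prediction = price + bias          # biased model anchored on current price
    price += reaction * (prediction - price)  # market reacts to the prediction
    history.append(price)

# The bias compounds period after period: the price drifts steadily
# away from fair value rather than reverting to it.
print(history[-1] - fair_value)  # → 20.0
```

Each period the price moves a fixed step toward the inflated prediction, so the distortion grows linearly and never self-corrects, which is precisely the perpetuation-of-bias risk the AIDA mitigation obligations are aimed at.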

United States Securities and Exchange Commission

The Report was published only a few months after the United States Securities and Exchange Commission (the “SEC”) proposed new rules in a press release dated July 26, 2023 (the “SEC PR”), which we understand would require broker-dealers and investment advisers (“Firms”) to implement measures addressing conflicts of interest related to their use of predictive data analytics and similar technologies in investor interactions. We understand that the SEC’s aim is to prevent situations where Firms’ priorities take precedence over the best interests of investors.

SEC Chair Gary Gensler is quoted as saying:

Today’s predictive data analytics models provide an increasing ability to make predictions about each of us as individuals. This raises possibilities that conflicts may arise to the extent that advisers or brokers are optimizing to place their interests ahead of their investors’ interests. When offering advice or recommendations, firms are obligated to eliminate or otherwise address any conflicts of interest and not put their own interests ahead of their investors’ interests. I believe that, if adopted, these rules would help protect investors from conflicts of interest — and require that, regardless of the technology used, firms meet their obligations not to place their own interests ahead of investors’ interests.

In the SEC PR, the SEC states that the adoption by Firms of technologies to optimize, predict, guide, forecast or direct investment-related behaviours has grown significantly. The SEC adds that while these technologies can offer benefits such as improved market access and efficiency, concerns arise when their use prioritizes the interests of Firms over those of investors, potentially causing financial harm. The scalability of these technologies poses a risk of conflicts of interest on a broader and more impactful scale than before. We understand that the proposed SEC rules aim to address this by requiring Firms to assess and mitigate such conflicts, allowing the use of specific risk-mitigation tools while maintaining written policies and procedures for compliance.


To date, the OSC is the only securities regulator in Canada to publish a report on AI use cases. The Report calls for continued collaboration among federal and provincial governments, securities and financial services regulators to develop consistent regulations. Given the likely role and importance of AI in the future developments of Canadian capital markets, we believe the Report will be the first, but not the last, publication by Canadian securities regulators on the topic of the evolving world of AI, particularly as Canadian federal privacy and AI laws evolve.

The Technology Group and Capital Markets Group at Aird & Berlis will continue to monitor developments in AI’s involvement in capital markets. Please contact a member of these groups if you have questions or require assistance with any matter related to the foregoing.