KEY POINTS
- Advanced artificial intelligence (AI) based trading systems, particularly those using deep learning techniques, raise concerns about concentration risk and market stability, with regulators warning about potential “monoculture” effects in financial markets.
- While concerns about systemic risks merit attention, several mitigating factors suggest these risks are likely overstated, including model diversity, implementation differences, and continued human oversight in current applications of AI in trading.
- The opacity of deep learning and potential for emergent behaviour in reinforcement learning based trading systems create significant challenges for market abuse surveillance and reporting obligations under existing regulatory frameworks.
Financial institutions are increasingly employing various forms of AI, with machine learning being the most prevalent. Machine learning encompasses several techniques, including supervised learning (where the model learns using labelled data), unsupervised learning (where the model identifies patterns and relationships in unlabelled data) and reinforcement learning (where the model interacts with an environment and receives feedback in the form of rewards or penalties). These methods enable models to learn from vast datasets, identify patterns, predict asset price movements, and take actions with increasing levels of autonomy.
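By way of illustration only, the following minimal Python sketch contrasts the two paradigms most relevant to this article: a supervised model fitted to labelled data, and a reinforcement learning agent that learns purely from reward feedback. All data, features and parameters are hypothetical.

```python
# Illustrative sketch of two learning paradigms used in trading.
# All data and parameters are hypothetical; this is not a trading strategy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Supervised learning: learn from *labelled* data ---------------------
# Hypothetical features: yesterday's return and volume change;
# label: did the price rise today?
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
clf = LogisticRegression().fit(X, y)            # model learns the labelled mapping
print("P(price up):", clf.predict_proba([[0.2, -0.1]])[0, 1])

# --- Reinforcement learning: learn from *rewards*, not labels ------------
# A toy agent chooses SELL/BUY each step and is rewarded by its profit.
Q = np.zeros(2)                                 # value of action 0=sell, 1=buy
alpha, eps = 0.1, 0.2
for step in range(1000):
    action = rng.integers(2) if rng.random() < eps else int(Q.argmax())
    ret = rng.normal(loc=0.01)                  # hypothetical market return
    reward = ret if action == 1 else -ret       # profit from the position taken
    Q[action] += alpha * (reward - Q[action])   # update from feedback alone
print("learned action values:", Q)
```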
While most investment managers currently use AI to support human decision-making, some firms are exploring autonomous systems using deep learning and reinforcement learning models that could execute investment decisions (or other actions) with minimal human oversight. These advanced models use artificial neural networks to identify complex patterns in large datasets, and, when combined, can create systems capable of both processing vast amounts of market data and learning optimal trading strategies.
Given the growing sophistication of these AI systems, this article examines the legal and regulatory implications of their integration into securities trading and investment management. We consider two principal risks identified by regulators – systemic risk and market manipulation – and evaluate whether current regulatory frameworks in the UK are equipped to address these challenges.
WHAT ARE THE KEY SYSTEMIC RISK CONCERNS WITH AI IN FINANCIAL MARKETS?
One of the primary concerns surrounding widespread adoption of advanced AI models in securities trading and investment management is their potential to undermine market stability. This concern has been raised by members of the Bank of England (BoE),1 the European Central Bank (ECB),2 the US Securities and Exchange Commission (SEC),3 the Dutch Authority for the Financial Markets (AFM),4 the International Organization of Securities Commissions (IOSCO),5 and the Financial Stability Board (FSB).6
Gary Gensler, Chair of the SEC, has warned7 that the inherent characteristics of deep learning, such as its hyper-dimensionality and insatiable demand for data, could lead to a convergence on a small number of dominant data providers and AI-as-a-Service companies. In essence, as Jonathan Hall (an external member of the BoE's Financial Policy Committee) points out:8 once a consensus emerges on the best model setup for trading algorithms, and the capacity to integrate data rises to match the quantity of readable data, “the financial incentive to allocate capital towards alternative models will not be there”.
This concentration could in turn create a “monoculture” in the financial system, where market participants draw from the same data and employ similar models, ultimately leading them to reach similar conclusions and investment strategies. The systemic implications of this convergence have prompted the ECB to warn9 specifically about its potential to distort asset prices, increase market correlations, foster herding behaviour, and even contribute to the formation of bubbles.
Another concern that has been highlighted by Hall is the potential for advanced AI systems to create more brittle and highly correlated markets during periods of stress. This is premised on the view that AI based trading systems, particularly those using deep and/or reinforcement learning techniques, may converge on similar trading strategies when exposed to the same price signals. In stress scenarios, they could exacerbate swings by acting in unison, further amplifying volatility and undermining liquidity when it is needed most. The IMF10 has echoed these concerns, noting that algorithmic trading strategies often include safety mechanisms that trigger de-risking or complete shutdowns during periods of high volatility, particularly when encountering unprecedented price movements. While these safeguards are designed to protect individual firms, their simultaneous activation across multiple market participants could create destabilising feedback loops and a sudden evaporation of market liquidity – precisely the systemic risks that Hall warns about.
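The dynamic the IMF describes can be illustrated with a stylised simulation: where many firms run similar volatility-based kill switches, a single shock can trigger successive waves of forced de-risking, each of which feeds back into the price. The following Python sketch uses entirely hypothetical numbers and a deliberately simplified price-impact model.

```python
# A stylised sketch of the feedback loop described above: clustered volatility
# kill switches trip together, and the aggregate forced selling deepens the
# very sell-off that triggered them. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n_firms = 50
positions = np.ones(n_firms)                    # every firm starts fully invested
thresholds = rng.normal(0.03, 0.005, n_firms)   # clustered vol limits ("monoculture")

price, prices = 100.0, []
for t in range(100):
    shock = rng.normal(0, 0.01) - (0.08 if t == 50 else 0.0)  # stress event at t=50
    price *= 1 + shock
    prices.append(price)
    if len(prices) > 5:
        vol = np.std(np.diff(np.log(prices[-6:])))  # short-run realised volatility
        tripped = (vol > thresholds) & (positions > 0)
        # Each tripped kill switch dumps that firm's position; the aggregate
        # forced selling moves the price further, which can trip more firms
        # on the next pass through the loop.
        price *= 1 - 0.002 * tripped.sum()
        positions[tripped] = 0.0

print(f"firms fully de-risked: {(positions == 0).sum()} of {n_firms}")
```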
These concerns are not merely theoretical. Events such as the 2010 “Flash Crash” and the 2007 “Quant Quake” serve as stark reminders of the potential for technology-driven market disruptions. In both cases, algorithmic trading strategies contributed to sudden and severe market dislocations. In the case of the 2010 “Flash Crash”, a single large sell order, executed by an automated trading algorithm, triggered a chain reaction among high-frequency trading firms, causing the Dow Jones Industrial Average to plunge nearly 1,000 points in a matter of minutes.
ARE THESE SYSTEMIC CONCERNS OVERSTATED?
Nevertheless, several factors suggest that the risks posed by advanced AI models to market stability may be overstated, at least for now.
First, despite the advanced capabilities of AI models, research11 by the Dutch central bank (De Nederlandsche Bank) and the AFM indicates that most financial institutions currently favour simpler, supervised learning models (such as linear and logistic regression) over complex deep learning or reinforcement learning models. Additionally, most firms maintain significant human oversight over their trading and investment operations. This includes setting and monitoring position limits, managing risk parameters and retaining the power to halt trading during market volatility (through the use of kill-switch functionalities).
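By way of illustration, the human-set controls described above might take the following minimal shape: hard per-instrument position limits checked before any order is released, plus a kill switch an operator can flip to halt all trading. All names and limits in this Python sketch are hypothetical.

```python
# A minimal sketch of human-set pre-trade controls: position limits and a
# manually operated kill switch. Names and limits are hypothetical.
from dataclasses import dataclass, field

@dataclass
class RiskControls:
    max_position: float = 1_000_000.0   # per-instrument limit set by humans
    kill_switch: bool = False           # flipped manually during market stress
    positions: dict = field(default_factory=dict)

    def approve(self, instrument: str, qty: float) -> bool:
        """Return True only if the order passes every pre-trade check."""
        if self.kill_switch:
            return False                # all trading halted by the operator
        new_pos = self.positions.get(instrument, 0.0) + qty
        if abs(new_pos) > self.max_position:
            return False                # order would breach the position limit
        self.positions[instrument] = new_pos
        return True

controls = RiskControls()
print(controls.approve("XYZ", 900_000))   # True: within limits
print(controls.approve("XYZ", 200_000))   # False: would breach the limit
controls.kill_switch = True               # human intervention
print(controls.approve("ABC", 10))        # False: trading halted
```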
Second, it is important to recognise that deep learning techniques are not monolithic or uniform. Deep learning encompasses various architectural approaches (such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer models), and their application in financial markets is tailored to specific tasks (eg price prediction, pattern recognition and risk assessment). This task-specific implementation of deep learning techniques, along with firm-specific choices in data inputs (as explained below), makes it unlikely that all market participants will use the same algorithms for their investment and/or trading strategies.
Third, as some market participants12 have noted, even if two investment managers use the same type of model with identical base architecture, their implementations are likely to differ significantly due to critical design and development decisions. Managers make different choices about data handling – including the type, frequency, scope, sources, structure, and preprocessing techniques – and many firms now incorporate diverse alternative datasets, such as environmental, social and governance (ESG) factors, satellite imagery or social media sentiment. These implementation differences and non-traditional data sources introduce substantial diversity into the decision-making process.
Finally, the very nature of market transactions requires counterparties with opposing views or needs. That is, even if many market participants use similar AI models that generate similar signals, trades can only occur when there are willing counterparties at the given price. Some participants will naturally take contrarian positions because they perceive value differently, operate over different time horizons or follow alternative strategies. For example, investment managers manage funds with varying objectives and strategies, reflecting their underlying investors' varied investment time horizons and risk appetites. It is this inherent diversity in fund mandates and investor preferences that ultimately influences how investment managers deploy AI models in their investment and/or trading strategies.
WHAT ARE THE MARKET ABUSE CONCERNS?
Regulatory authorities are also concerned about the potential for deep and/or reinforcement learning based trading algorithms to engage in or facilitate market abuse.
As the AFM has noted,13 naively programmed reinforcement learning algorithms could inadvertently learn to manipulate markets. For instance, a reinforcement learning based trading algorithm, if left unchecked and without proper constraints, could learn to exploit its own ability to influence asset prices or collude with other AI systems to do the same.
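To make the reward-design point concrete, the sketch below (using hypothetical coefficients and a deliberately simplified linear impact model) shows how a naive reward based on raw profit pays an agent for price moves the agent itself caused, whereas a reward that penalises the agent's own estimated impact does not.

```python
# A stylised sketch of the reward-design problem described above. A naive
# reward pays the agent for raw mark-to-market profit, including profit that
# comes purely from the price pressure of its *own* orders, so "push the
# price, then unwind" scores well. A constrained reward penalises the agent's
# estimated own impact. All coefficients are hypothetical.

IMPACT = 0.01  # hypothetical linear price impact per unit traded

def step_price(price: float, agent_qty: float) -> float:
    """The agent's own order moves the mid-price (simplified linear impact)."""
    return price + IMPACT * agent_qty

def naive_reward(pnl: float) -> float:
    return pnl                      # blind to *how* the profit was generated

def constrained_reward(pnl: float, own_impact_pnl: float,
                       penalty: float = 2.0) -> float:
    return pnl - penalty * own_impact_pnl   # profit net of self-generated moves

# The agent buys 100 units, marking its own position up via price impact.
price0 = 100.0
price1 = step_price(price0, +100)           # own buying lifts the price
pnl = 100 * (price1 - price0)               # gain exists only because of impact
print(naive_reward(pnl))                    # 100.0 -> manipulation is rewarded
print(constrained_reward(pnl, pnl))         # -100.0 -> manipulation is punished
```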
Academic research14 by Dou et al (2023) shows that in simulated markets, AI-driven trading agents can achieve near-cartel-like profits without being explicitly programmed to collude, through a phenomenon known as emergent communication. This occurs when autonomous AI systems (AI agents), operating in the same environment, begin to develop spontaneous patterns of behaviour that resemble communication and allow them to coordinate their actions in pursuit of (for example) profit-maximising strategies. Such interactions are generally uninterpretable to human observers, and thus difficult to monitor or control. In some cases, AI agents may even influence each other’s learning processes through a dynamic called opponent shaping, where one AI system’s actions directly impact how another system learns and behaves.
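A toy version of the repeated-game setting studied in this literature can be sketched as follows: two independent Q-learning agents repeatedly set a HIGH or LOW price, observing only the previous round's joint actions, and nothing in the code instructs them to coordinate. Payoffs and parameters are hypothetical, and whether the high-price outcome actually emerges depends on parameters and random seed; the point is only that any coordination arises from learning, not from explicit programming.

```python
# Two independent Q-learning agents in a repeated pricing game. Each observes
# the previous joint action (one-period memory) and is never told to collude.
# With such memory, agents can learn reward/punishment patterns that sustain
# the high-price outcome. Payoffs and parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
# payoff[my_action][rival_action]; actions: 0 = LOW price, 1 = HIGH price.
payoff = np.array([[4.0, 8.0],     # undercutting a HIGH rival pays best once,
                   [2.0, 6.0]])    # but mutual HIGH beats mutual LOW.

n_states = 4                       # state = previous joint action profile
Q = [np.zeros((n_states, 2)) for _ in range(2)]
alpha, gamma, state = 0.1, 0.9, 0

for t in range(200_000):
    eps = max(0.01, 0.99995 ** t)                 # decaying exploration
    acts = [rng.integers(2) if rng.random() < eps else int(Q[i][state].argmax())
            for i in range(2)]
    next_state = acts[0] * 2 + acts[1]
    for i in range(2):
        r = payoff[acts[i]][acts[1 - i]]
        Q[i][state, acts[i]] += alpha * (r + gamma * Q[i][next_state].max()
                                         - Q[i][state, acts[i]])
    state = next_state

greedy = [int(Q[i][state].argmax()) for i in range(2)]
print("long-run play:", ["LOW", "HIGH"][greedy[0]], ["LOW", "HIGH"][greedy[1]])
```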
This potential for AI systems to manipulate, or be manipulated by, other AI systems has caught regulators' attention. For example, the AFM has suggested15 that regulatory authorities should focus not only on detecting agents that manipulate the market but also on making AI systems less susceptible to manipulation.
The European Commission (the Commission) has also recognised the importance of these risks, as highlighted in its recent consultation16 on AI, where it raised concerns about machine learning based trading algorithms interacting unpredictably. The Commission explicitly asks whether these interactions could lead to market manipulation or sudden liquidity issues, thus confirming that this risk is not merely theoretical but one on which regulators are already focusing their attention.
HOW DOES THE CURRENT REGULATORY FRAMEWORK ADDRESS THESE RISKS?
The UK's existing financial regulatory regime is technology-agnostic and principles-based, meaning that potentially harmful behaviours by AI systems would likely fall within its scope regardless of the underlying technology. More precisely, Recital 38 of the UK Market Abuse Regulation (MAR) confirms that MAR applies to market manipulation carried out by any available means of trading, while the FCA has previously indicated17 that any attempt to exploit algorithmic trading would similarly be caught by these provisions. As the FCA noted in its final notice to 7722656 Canada Inc18 (formerly Swift Trade Inc):
“Algorithmic and high frequency trading is a legitimate activity and therefore an abusive strategy which is designed to exploit these forms of trading is unacceptable.”
Beyond market abuse considerations, these systems would also be subject to specific algorithmic trading regulations. The MiFID II algorithmic trading requirements, in RTS 6, generally apply where a firm trades in financial instruments using an algorithm that automatically determines individual parameters of orders, whether that is through an algorithm that generates orders (ie an investment decision algorithm) or one that optimises the execution of orders by automated means (ie an order execution algorithm). Indeed, Recital 2 of RTS 6 explicitly states “any type of execution system or order management system operated by an investment firm should be covered by this Regulation”. Therefore, it is likely that the AI systems in question would fall within the scope of the MiFID II algorithmic trading requirements, albeit only for on-venue transactions, as current guidance appears to exclude OTC transactions from these requirements.
Accordingly, the crux of the regulatory challenge lies not in whether the behaviours of deep or reinforcement learning based trading algorithms would fall within this existing regime – they likely would – but in the practical difficulties of complying with such regulations. Below, we explore several high-level examples of these challenges.
Article 16(1) MAR requires operators of trading venues to report orders and transactions that could constitute insider dealing, market manipulation, or attempted insider dealing or market manipulation (together, market abuse) to the FCA without delay. Article 16(2) MAR imposes similar reporting requirements on persons professionally arranging or executing transactions, where they have reasonable suspicion that an order or transaction could constitute market abuse. The definition of persons professionally arranging or executing transactions is broad, encompassing not only executing brokers but also investment managers (including AIFMs and UCITS management companies). As a result, MAR requires persons professionally arranging or executing transactions in in-scope financial instruments (broadly, those traded on EU or UK trading venues) to submit a suspicious transaction and order report (STOR) to the FCA without delay where they reasonably suspect market abuse.
These provisions are based on the assumption that market abuse can be identified and reported by human observers or traditional surveillance systems. However, the fundamental characteristics of reinforcement learning and sophisticated AI based trading models (more generally) challenge this assumption in several ways.
First, machine learning models, particularly those using reinforcement learning, often operate as “black boxes” where the reasoning behind their outputs remains opaque, making their decisions difficult to interpret or explain, even for their developers.19 As such, it may not be possible for market participants and operators of trading venues to recognise market abuse solely from the trading patterns and decisions of these machine learning models.
Second, the concept of “reasonable suspicion” under Article 16(2) MAR becomes especially problematic when applied to AI-driven trading. Complex AI models, particularly those using deep learning, can identify and exploit market patterns and correlations that, while legitimate, may not be immediately recognisable to human observers. Consequently, without clear indicators or an understanding of the model's internal logic, firms may struggle to distinguish between legitimate trading strategies and potentially abusive behaviours, making it difficult to establish a solid foundation for deciding whether or not to submit a STOR to the FCA. Indeed, as one commentator has noted,20 the concept of market manipulation itself becomes difficult to apply in the context of advanced forms of algorithmic trading.
Third, while some commentators21 have suggested that market abuse risks may be mitigated by restricting deep learning models to the generation of trading signals – thereby separating investment decision-making from trade execution – recent research indicates this may be insufficient. For example, Scheurer et al (2023)22 demonstrate that, under specific conditions, AI systems may engage in deceptive behaviours by concealing their true objectives from their operators, even where trained to be helpful, harmless, and honest. This risk was also highlighted at the UK's AI Safety Summit23 in November 2023, where researchers demonstrated how, under certain conditions, AI bots could strategically deceive regulators by exploiting gaps in oversight.
Adding to this complexity, Professor Wellman has highlighted24 that requiring algorithms to report cases of market manipulation by other algorithms, as suggested in the FCA's April 2024 AI Update,25 could trigger an adversarial learning dynamic. In this scenario, AI based trading algorithms may learn from each other's techniques and evolve strategies to obfuscate their goals, leading to a continuous cycle where both manipulative algorithms and detection systems constantly evolve to outmanoeuvre each other.
While it is acknowledged that current AI deployment in securities trading and investment management has not reached this level of sophistication, these findings raise important considerations for future market surveillance (particularly with the rise of more agentic AI models). Indeed, the FCA has emphasised in both its Market Watch 79 newsletter26 and its July 2023 Dear CEO Letter27 to Proprietary Trading Firms that firms must maintain vigilant surveillance systems that keep pace with technological advancement, including developments in artificial intelligence.
In light of these challenges, it comes as no surprise that regulatory authorities are increasingly focused on explainability and human oversight.
EXPLAINABILITY AND REGULATORY COMPLIANCE
There is a growing tension between the need for explainability and the demand for high-performance models. Most trading firms prioritise the performance of AI models over their explainability, arguing that the output of these models is more important than the process behind it. This stance, however, poses fundamental challenges for regulatory compliance under MiFID II RTS 6, which requires firms engaged in algorithmic trading to ensure that (among other things):
- they have a “full understanding” of their trading algorithms and associated risks, regardless of whether these systems were developed internally or procured from third parties. However, the inherent opacity of sophisticated AI systems makes this requirement fundamentally problematic, as the complexity of deep learning models often defies traditional methods of comprehension and documentation;
- their trading algorithms do not behave in an “unintended manner”. Yet, studies demonstrate that AI systems can optimise reward functions in ways that actively obscure their objectives from human operators. This capacity for deceptive behaviour and emergent strategies makes it virtually impossible for firms to provide meaningful assurance against unintended outcomes; and
- their compliance and risk staff have “sufficient knowledge” of algorithmic trading and strategies to effectively challenge trading staff where appropriate. This requirement becomes particularly problematic in the context of advanced AI systems, where even the technical developers may struggle to fully explain the decision-making processes. For compliance and risk staff, tasked with oversight but typically lacking deep technical expertise, this may create an insurmountable barrier to effective supervision.
These challenges highlight a fundamental misalignment between current regulatory requirements, which presume transparency and explainability, and the reality of advanced AI trading systems, where opacity and emergent behaviour are inherent characteristics rather than design flaws.
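One family of techniques that firms use to narrow this gap is post-hoc explainability. The sketch below, on synthetic data, uses permutation importance, which measures how much a model's accuracy degrades when each input feature is shuffled. It does not render a deep model transparent, but it is indicative of the kind of evidence a firm might assemble when seeking to demonstrate “understanding” of a model; all data and feature names are hypothetical.

```python
# Post-hoc explainability via permutation importance: shuffle each feature
# and measure how much the model's score drops. A large drop indicates the
# model relies heavily on that feature. Data here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))                      # 4 hypothetical signals
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.3, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)   # an opaque-ish model

result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, imp in zip(["signal_a", "signal_b", "signal_c", "signal_d"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")     # higher = the model relies on it more
```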
1 https://www.bankofengland.co.uk/speech/2024/may/jon-hall-speech-at-the-university-of-exeter
2 https://www.ecb.europa.eu/press/financial-stability-publications/fsr/special/html/ecb.fsrart202405_02~58c3ce5246.en.html
3 https://www.sec.gov/newsroom/speeches-statements/gensler-transcript-systemic-risk-artificial-intelligence-091924
4 https://www.afm.nl/~/profmedia/files/rapporten/2023/report-machine-learning-trading-algorithms.pdf
5 https://www.iosco.org/library/pubdocs/pdf/IOSCOPD684.pdf
6 https://www.fsb.org/2024/11/the-financial-stability-implications-of-artificial-intelligence/
7 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3723132
8 https://www.bankofengland.co.uk/speech/2024/may/jon-hall-speech-at-the-university-of-exeter
9 https://www.ecb.europa.eu/press/financial-stability-publications/fsr/special/html/ecb.fsrart202405_02~58c3ce5246.en.html
10 https://www.elibrary.imf.org/view/book/9798400277573/CH003.xml
11 https://www.dnb.nl/en/sector-news/supervision-2024/afm-and-dnb-publish-report-on-the-impact-of-ai-on-the-financial-sector-and-supervision/
12 https://www.institutionalinvestor.com/article/2dozjcglv7kxsgr3t4feo/opinion/secs-ai-driven-market-risk-worries-justified-caution-or-misplaced-concern
13 https://www.afm.nl/~/profmedia/files/rapporten/2023/report-machine-learning-trading-algorithms.pdf
14 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4452704
15 https://www.afm.nl/~/profmedia/files/rapporten/2023/report-machine-learning-trading-algorithms.pdf
16 https://finance.ec.europa.eu/document/download/054d25f5-0065-488a-96fb-2bb628c74e6f_en?filename=2024-ai-financial-sector-consultation-document_en.pdf
17 https://www.fca.org.uk/news/speeches/market-abuse-requires-dynamic-response-changing-risk-profile
18 https://www.fca.org.uk/publication/final-notices/7722656-canada-inc.pdf
19 https://www.iosco.org/library/pubdocs/pdf/IOSCOPD684.pdf
20 https://www.fca.org.uk/publication/research/eu-market-abuse-regime-is-it-fit-for-purpose.pdf
21 https://www.bankofengland.co.uk/speech/2024/may/jon-hall-speech-at-the-university-of-exeter
22 https://arxiv.org/abs/2311.07590
23 https://www.bbc.co.uk/news/technology-67302788
24 https://www.waterstechnology.com/regulation/7951639/ai-expert-warns-of-algo-based-market-manipulation
25 https://www.fca.org.uk/publication/corporate/ai-update.pdf
26 https://www.fca.org.uk/publications/newsletters/market-watch-79
27 https://www.fca.org.uk/publication/correspondence/portfolio-letter-our-supervisory-strategy-principal-trading-firms.pdf