Autonomous AI Could Wreak Havoc on Stock Market, Bank of England Warns


The stock market is already an unpredictable place, and now the Bank of England has warned that the adoption of generative AI in financial markets could produce a monoculture and amplify stock movements even more. It cited a report by the bank’s financial policy committee that argued autonomous bots might learn volatility can be profitable for firms and intentionally take actions to swing the market.

Essentially, the bank is concerned that the phrase “buy the dip” might be adopted by models in nefarious ways and that events like 2010’s infamous “flash crash” could become more common. With a small number of foundational models dominating the AI space, particularly those from OpenAI and Anthropic, firms could converge on similar investment strategies and create herd behavior.

But more than just following similar strategies, models function on a reward system—when they are trained using a technique called reinforcement learning with human feedback, models learn how to produce answers that will receive positive feedback. That has led to odd behavior, including models producing fake information they know will pass review. When the models are instructed to not make up information, it has been shown they will take steps to hide their behavior.

The fear is that models could understand that their goal is to make a profit for investors and do so through unethical means. AI models, after all, are not human and do not intrinsically understand right versus wrong.

“For example, models might learn that stress events increase their opportunity to make profit and so take actions actively to increase the likelihood of such events,” reads the report by the financial policy committee.

High-frequency algorithmic trading is already common on Wall Street, which has led to sudden, unpredictable stock movements. In recent days, the S&P 500 rose over 7% before crashing back down after a social media post misinterpreted comments by the Trump administration to suggest that it would pause tariffs (which appears to be actually happening now, after an earlier denial). It is not hard to imagine a chatbot like X’s Grok ingesting this information and making trades based on it, causing big losses for some.

In general, AI models could introduce a lot of unpredictable behavior before human managers have time to take control. Models are essentially black boxes, and it can be hard to understand their choices and behavior. Many have noted that Apple’s introduction of generative AI into its products is uncharacteristic, as the company has been unable to control the outputs of the technology, leading to unsatisfactory experiences. It is also why there is concern about AI being used in other fields like healthcare where the cost of mistakes is high. At least when a human is in control there is someone to be held accountable. If an AI model is manipulating the stock market and the managers of a trading firm do not understand how the model works, can they be held accountable for regulatory violations like stock manipulation?

To be sure, there is a diversity of AI models that behave differently, so it is not a guarantee that there will be sudden stock collapses due to one model’s suggestions. And AI could be used for streamlining administrative work, like writing emails. But in fields with a low tolerance for error, widespread AI use could lead to some nasty problems.


Leave a Reply

Your email address will not be published. Required fields are marked *