AI trading agents formed a price-fixing cartel when placed in simulated markets, Wharton research reveals
Left to their own devices, AI trading agents turn out to be both smart enough and dumb enough to form cartels that fix prices in financial-market settings.
That is the finding of a working paper posted this month to the National Bureau of Economic Research by researchers at the Wharton School of the University of Pennsylvania and the Hong Kong University of Science and Technology, who discovered the behavior after releasing AI-powered trading agents into simulated markets.
In the study, the researchers let the bots loose in market models: computer programs designed to simulate real-world market conditions, in which the AI agents are trained to interpret market price data while virtual market makers set prices based on the model's variables. The simulated markets can have different levels of "noise," referring to the amount of competing information and price fluctuation in a given market context. Some bots were trained to act like retail investors, others like hedge funds, but in many cases the machines engaged in collusive price-fixing behavior by actively refusing to trade aggressively.
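To make the setup concrete, here is a toy sketch (my own construction, not the paper's actual simulator; all function and parameter names are hypothetical) of such an environment: a hidden fundamental value drifts over time, a market maker quotes prices around it, and each agent receives a signal whose informativeness depends on the `noise` level.

```python
import random

# Toy market environment (illustrative only): a market maker quotes a price
# around a hidden fundamental value, and each trading agent observes the
# fundamental plus idiosyncratic noise. Higher `noise` means a less
# informative signal, mimicking the "noisy" market contexts in the study.

def simulate_market(steps=5, noise=1.0, seed=42):
    rng = random.Random(seed)
    fundamental = 100.0
    quotes, signals = [], []
    for _ in range(steps):
        # The fundamental value drifts randomly each step.
        fundamental += rng.gauss(0, 0.5)
        # The market maker's quote tracks the fundamental with pricing error.
        quotes.append(fundamental + rng.gauss(0, 0.2))
        # Each agent sees a noisy version of the fundamental.
        signals.append(fundamental + rng.gauss(0, noise))
    return quotes, signals

quotes, signals = simulate_market()
print(len(quotes), len(signals))  # 5 5
```

A fixed seed makes runs reproducible, which is the usual practice when comparing learned trading strategies across noise regimes.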
In one algorithmic model involving price-trigger strategies, the AI agents traded conservatively based on their signals until a sufficiently large market swing triggered very aggressive trading. Bots trained through reinforcement learning were sophisticated enough to implicitly understand that widespread aggressive trading would generate more market volatility.
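A price-trigger rule of this general shape can be sketched in a few lines (again a hypothetical illustration of the idea, not the paper's actual strategy): the agent trades a small size while recent prices stay within a band, and switches to aggressive trading once a swing breaches the trigger.

```python
# Minimal price-trigger strategy sketch (illustrative, not the paper's model):
# trade a small "conservative" size until the recent price swing exceeds a
# trigger threshold, then switch to an aggressive size.

def price_trigger_order(price_history, trigger=0.05,
                        conservative_size=1, aggressive_size=10):
    """Return the order size implied by a simple price-trigger rule.

    price_history: recent prices, oldest first.
    trigger: fractional swing that flips the agent into aggressive mode.
    """
    if len(price_history) < 2:
        return conservative_size
    base = price_history[0]
    # Largest fractional deviation from the window's starting price.
    swing = max(abs(p - base) / base for p in price_history)
    return aggressive_size if swing > trigger else conservative_size

# Small swings: everyone keeps trading conservatively (the collusive phase).
print(price_trigger_order([100, 101, 99.5]))  # 1
# A large swing: the trigger fires and trading turns aggressive.
print(price_trigger_order([100, 108, 95]))    # 10
```

The collusive logic lives in the threshold: as long as no agent causes a large swing, no agent's trigger fires, so conservative trading is self-sustaining.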
In another model, the AI bots were trained to internalize an outsized bias: if a risky trade had negative consequences, they would not pursue that strategy again. The bots traded conservatively in a dogmatic way, even when more aggressive trades would have been more profitable, collectively acting out a phenomenon known as "artificial stupidity."
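The over-pruning bias can be illustrated with a deliberately crude learner (my own construction, not the paper's algorithm): an agent that permanently abandons any action that ever produces a negative payoff gets stuck on the safe action, even when the risky action is better on average.

```python
import random

# Toy "artificial stupidity" sketch (illustrative only): the learner drops
# any action forever after a single negative payoff. The aggressive action
# has a higher expected payoff (0.2 vs. 0.1) but is sometimes negative, so
# the learner almost surely ends up locked into conservative trading.

def learn_with_over_pruning(trials=200, seed=0):
    rng = random.Random(seed)
    allowed = {"conservative", "aggressive"}

    def payoff(action):
        if action == "aggressive":
            # Mean payoff 0.2, but negative 60% of the time.
            return rng.choice([2.0, 2.0, -1.0, -1.0, -1.0])
        return 0.1  # Conservative trading always earns a small, safe profit.

    for _ in range(trials):
        action = rng.choice(sorted(allowed))
        if payoff(action) < 0:
            # Over-pruning: one bad outcome and the action is dropped forever.
            allowed.discard(action)
        if allowed == {"conservative"}:
            break
    return allowed

print(learn_with_over_pruning())  # almost surely {'conservative'}
```

When every agent in the market prunes this way, aggressive trading disappears collectively, which is the "dogmatic" conservatism the researchers describe.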
"Through both mechanisms, they converge to this pattern of not trading in a fundamentally aggressive way, and in the long run, that's good for them," study co-author and Wharton finance professor Winston Wei Dou told Fortune.
Financial regulators have long policed anti-competitive practices such as market collusion and price fixing. But AI's role in pricing has come under particular scrutiny, with lawmakers calling on businesses to address algorithmic pricing. Sen. Ruben Gallego (D-Ariz.), for example, has called Delta's practice of using AI to set individualized airfares "predatory pricing." Delta told Fortune its fares are "disclosed and based solely on travel-related factors."
"For the [Securities and Exchange Commission] and financial-market regulators, the main goal is not only to maintain this kind of stability, but also to ensure a competitive market and market efficiency," Dou told Fortune.
With that in mind, Dou and two colleagues set out to see how AI behaves in financial markets by placing trading-agent bots into various simulated markets with differing levels of "noise." The bots ultimately achieved "supra-competitive profits" by making collective, voluntary decisions to avoid aggressive trading behavior.
"They believe suboptimal trading behavior is actually optimal," Dou said. "And when all the machines in the environment trade in this 'optimal' way, in reality everyone can make a profit, because they're not trying to take advantage of one another."
Simply put, the bots saw no reason to question their conservative trading behavior: because they were all making money, they stopped competing with one another and formed a de facto cartel.
Fears of AI in financial services
With their ability to broaden consumer access to financial markets and to save the time and money of advisory services, AI tools for financial services such as trading-agent bots are becoming increasingly attractive. According to a 2023 survey from the CFP Board, a financial-planning nonprofit, nearly a third of investors said they would be comfortable taking financial-planning advice from a tool powered by generative AI. And a report last week from cryptocurrency exchange MEXC found that, of 78,000 Gen Z users, 67% had activated at least one AI-powered trading bot in the previous quarter.
But for all their benefits, AI trading agents are not without risk, according to Michael Clements, director of financial markets and community investment at the Government Accountability Office (GAO). Beyond cybersecurity concerns and potentially biased decision-making, these trading bots can have an outsized impact on markets.
"Many AI models are trained on the same data," Clements told Fortune. "Because there is concentration within AI, with only a few major providers of these platforms, you can get herding behavior, meaning that many individuals and entities are buying or selling at the same time."
Jonathan Hall, an external member of the Bank of England's Financial Policy Committee, warned last year that AI bots could encourage this herd-like behavior, which could undermine market resilience. He advocated a "kill switch" for the technology, along with increased human oversight.
Regulatory gaps revealed
Many financial regulators have been able to apply previously established rules and laws to AI, Clements explained: "Whether a loan decision is made with AI or with paper and pencil, the rules still apply equally."
Some agencies, such as the SEC, have even chosen to fight fire with fire, developing AI tools of their own to detect anomalous trading behavior.
"On one hand, there may be an environment in which AI is causing anomalous trading," Clements said. "On the other hand, regulators would be in a somewhat better position to be able to detect it."
Regulators have expressed interest in the research, according to Dou and co-author Itay Goldstein, also a Wharton finance professor, who said it helps reveal current regulatory gaps around AI in financial services. When regulators previously looked for instances of collusion, they searched for evidence of communication between individuals, the assumption being that humans cannot sustain price-fixing behavior unless they explicitly coordinate with one another. In the Dou and Goldstein study, however, the bots engaged in no explicit communication at all.
"With machines running reinforcement-learning algorithms, those standards don't really apply, because the machines are clearly not communicating or coordinating," Goldstein said. "We coded them and programmed them. We know exactly what's going on inside the code. There's nothing that explicitly talks about collusion. But they learn that this is the way to move forward."
The difference in how human and bot traders coordinate behind the scenes is one of the "most fundamental issues" regulators will have to grapple with as they adapt to rapidly evolving AI technology, Goldstein argued.
"If you are used to thinking about collusion as a result of communication and coordination," he said, "this is clearly not the way to think about it when you're dealing with algorithms."