AI expert warns of algo-based market manipulation

Regulators must keep pace with new technology that could be used to control financial markets


“The prospect of supercharged manipulation is likely,” says Michael Wellman, a University of Michigan professor. “We know manipulation is already a very prevalent practice, and so if bodies are doing this, they’re going to try and avail themselves of the latest tools. We would be foolish not to expect to see intentional manipulation enhanced by AI.”

Wellman, who earned his PhD in artificial intelligence from MIT in 1988, knows what he is talking about. He has spent his career researching AI and its applications in economics.

With AI already deeply embedded in the financial services sector and new advances arriving ever more frequently, Wellman says that while the innovative potential is clear, so is the potential for market abuse.

His view is grounded in an experiment run by his team at Michigan. They built a simple spoofing algorithm alongside a benign market-making algorithm, then trained a detector to tell the two apart. Both algorithms submitted, amended and cancelled orders, and at first the detector distinguished them easily – until the team let the spoofing algorithm try to evade detection by behaving more like a market-maker. By learning to avoid detection, a spoofing algorithm can manipulate markets while mirroring non-threatening algorithmic trading systems, setting a precedent for intentional, AI-led market disruption.
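The mechanics can be sketched in a few lines of Python. The snippet below is not Wellman’s code: it assumes two hypothetical agents summarised by invented per-window features (order cancellation rate and resting-order imbalance), fits a simple classifier to tell them apart, and then shows detection accuracy falling once the spoofer shifts its behaviour toward the market-maker’s statistics.

```python
# Minimal sketch (not the Michigan experiment): synthetic order-flow features for
# a benign market maker and a spoofer, a simple detector, and an "evasive" spoofer
# that mimics the market maker. All agent profiles and feature choices are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_agent(n, cancel_rate, imbalance, noise=0.05):
    """Draw n feature vectors [cancellation rate, resting-order imbalance]."""
    return np.column_stack([
        rng.normal(cancel_rate, noise, n),
        rng.normal(imbalance, noise, n),
    ])

# Benign market maker: moderate cancellations, roughly balanced quoting.
maker = sample_agent(500, cancel_rate=0.40, imbalance=0.05)
# Naive spoofer: large one-sided interest that is almost always cancelled.
spoofer = sample_agent(500, cancel_rate=0.95, imbalance=0.80)

X = np.vstack([maker, spoofer])
y = np.array([0] * 500 + [1] * 500)          # 0 = market maker, 1 = spoofer
detector = LogisticRegression().fit(X, y)

ones = np.ones(200, dtype=int)
print("accuracy vs naive spoofer:  ",
      detector.score(sample_agent(200, 0.95, 0.80), ones))
# Evasive spoofer: moves its statistics toward the market maker's to slip past the detector.
print("accuracy vs evasive spoofer:",
      detector.score(sample_agent(200, 0.50, 0.15), ones))
```

In this toy setting the naive spoofer is flagged almost every time, while the evasive one is largely misclassified as a market maker, echoing the qualitative result described above.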

Wellman isn’t an academic speaking only in abstracts. He worked through the e-commerce wave of the early 1990s, and the 2008 global financial crisis pushed him to get to grips with the “big black box” of finance. From then on, he focused on the financial markets, drawn in by practitioners calling for more computer scientists to work on the computational phenomena at the intersection of new technology and finance. His work on the latency arms race among high-frequency trading firms – the scramble to shave milliseconds and microseconds off response times – and on how it changes the effective dynamics of trading cemented his place in the finance world. He then turned his knowledge of AI to the study of market manipulation.

“I’ve been around for a while, and it reminds me of what we were like with respect to the internet 30 years ago,” Wellman says. “Whatever you were doing, it was necessary to ask: ‘How does the web and the internet change how we do business?’ and of course, it did change things quite pervasively. AI might not play out the same way, but it’s an echo.” 

That may be true, but it’s an echo with the potential to do serious damage. On January 8, Wellman presented to the Commodity Futures Trading Commission’s Technology Advisory Committee (TAC) on the responsible use of AI in the financial markets. In his 20-minute presentation, he argued that while AI could help regulators detect market manipulation, it could also be used by bad actors to carry it out, forcing the two sides into a technological “arms race”.

Wellman says training algorithms to flag market manipulation by other algorithms sets up an adversarial learning situation, in which each side learns the other’s techniques and tries to obfuscate its own behaviour to improve its chances of success.

“Whenever you have a machine-learning approach to try to detect adverse behaviour, you get into a kind of arms race, which is called adversarial learning,” Wellman told the TAC. “In this case we have a race between a detector and a manipulator. The detector looks at behaviour, classifies it as being manipulative or not, and the would-be manipulator is trying to manipulate but also trying to evade detection. The problem is that any advance in detection immediately could be exploited by a manipulator to evade detection.”
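To make the arms race concrete, here is a toy iteration built on the same invented features as the sketch above; the profiles, update rule and evasion threshold are assumptions for illustration, not anything from Wellman’s work. Each round, the detector retrains on the manipulator’s most recently observed behaviour, and the manipulator then probes the live detector and edges toward the benign profile until it slips below the detection threshold.

```python
# Toy adversarial-learning loop (assumed setup, not Wellman's model): the detector
# retrains each round, and the manipulator shifts toward market-maker-like behaviour
# until it is flagged less than half the time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
benign_profile = np.array([0.40, 0.05])   # [cancellation rate, resting-order imbalance]
manip_profile = np.array([0.95, 0.80])

def sample(profile, n=300, noise=0.05):
    return rng.normal(profile, noise, size=(n, 2))

for rnd in range(4):
    # Detector retrains on the manipulator's most recently observed behaviour.
    X = np.vstack([sample(benign_profile), sample(manip_profile)])
    y = np.array([0] * 300 + [1] * 300)
    detector = LogisticRegression().fit(X, y)

    # Manipulator probes the live detector and edges toward the benign profile
    # until it is flagged less than half the time (or gives up after 20 steps).
    for _ in range(20):
        caught = detector.predict(sample(manip_profile)).mean()
        if caught < 0.5:
            break
        manip_profile = 0.9 * manip_profile + 0.1 * benign_profile

    print(f"round {rnd}: evaded at profile {np.round(manip_profile, 2)}, "
          f"flagged {caught:.0%} of the time")
```

Each improvement in detection is answered by a further shift toward benign-looking behaviour, the dynamic Wellman says leaves it unclear which side holds the long-term upper hand.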

Wellman likens the dynamic to the spam email bots of the early 2000s. At first, bots ran rampant, clogging inboxes with phishing links and junk mail and exploiting lax security. When email providers toughened their spam detection software and spam’s prevalence declined, the spammers got smarter.

“When spam filters came in, some of the spammers started obfuscating their messages, spelling things wrong, and putting weird symbols to get around the detectors,” Wellman explains. “That sometimes works, but then the messages would be weirder and less effective as spam.” 

In response, regulators and exchanges are tooling up to combat potential manipulation, but their efforts may not be enough. “They’re really working hard to improve their detection. This arms race is without a doubt going to be playing out – and it's not clear which side is going to have the long-term upper hand in that,” he says. 

Aside from regulators and exchanges, some trade surveillance companies are already exploring machine learning to police the markets. Surveillance and compliance specialist Nice Actimize has tested machine learning models to improve its Surveil-X surveillance and analytics platform, while Nasdaq’s Smarts platform has integrated artificial intelligence into its detection software.

Wellman believes that with these new technologies threatening to lock detectors and manipulators into an arms-race spiral, a robust defence against manipulation will require significant structural changes to markets, such as increases, or perhaps decreases, in transparency.

“Paradoxically, it could involve more transparency or in some respects, less. Let’s say people cannot see the order book, well then no-one can manipulate the order book because no-one will be misled by the information there. You can’t spoof a dark pool,” Wellman says.
