Hedge fund’s bots hunt for ‘non-linear’ trade signals

Boutique investment firm Goose Hollow uses LLMs to scrape thousands of news sources, searching for links that others miss


In December 2022, as the war in Ukraine raged, a Russian-flagged ship docked at South Africa’s Simon’s Town naval base. Goose Hollow Capital’s generative artificial intelligence models, which track and filter news events, brought the arrival to the boutique hedge fund’s attention. The firm responded by eliminating all of its exposure to South African assets.

It was another five months before the US ambassador to South Africa would accuse the country of supplying arms to Russia during the Lady R’s visit, in what became known as #LadyRussiagate.

Gaining early insight into developing macro events is just one of the several ways that Goose Hollow has been able to use large language models, or LLMs.

The firm has found there are many things the models cannot do, and many they can, according to its founder and chief investment officer, Krishna Kumar. The models might not be capable of formulating a successful investing strategy from scratch, for example, but they can provide a kind of robot sounding board that challenges an investment thesis.

Kumar and his team have discovered the models can also ease the awkwardness that might otherwise stop portfolio managers from asking critical questions for fear of appearing ‘stupid’. The firm has used the models to map, in rudimentary form, which elements of the real economy influence which others. And, as in the Lady R case, it has found the models can successfully identify events that might signal an investment opportunity.

“It is a very useful thinking tool,” Kumar says.

AIs on the prize

Goose Hollow is a small hedge fund of five people, founded in 2019 and based in Tenafly, New Jersey. The firm has been experimenting with LLMs for the past few years, looking for ways to use the technology to enhance the productivity and performance of its team.

Goose Hollow’s work is part of a broader push among hedge funds looking to harness the power of LLMs – a type of generative artificial intelligence. The technology, which is a subset of machine learning, powers advanced chatbots such as ChatGPT.

Other hedge funds have built large language models to help sift information, summarise data, assist with programming queries, or extract sentiment signals even from internal communications.

A recent survey of more than 150 hedge funds found that 86% of firms allowed their staff to use some form of GenAI tools at work.

Kumar initially set out to use LLMs to manage the deluge of information now available; to allocate his and his small team’s attention more efficiently.

“We live in a world where we are bombarded with information,” says Kumar. “What is missing is attention and distillation.”

While some hedge funds look to LLMs for big-picture, sweeping trends, Goose Hollow is taking a different path. It has trained its models to zero in on verifiable facts. To provide the raw material for the models, the firm pipes in a stream of content from more than 8,000 news websites across the world, all of which are free to access.

Events are important because they may have direct or indirect consequences. “We first identified the [Lady R] story in early 2023. It was an anomalous event,” says Kumar. After the US ambassador’s public intervention in May, there was a selloff in South African rates, FX and equities, Kumar says.

By aggregating and filtering news events, Goose Hollow looks to assess whether the consequences are likely to affect market prices. Kumar says the firm’s investment style is to seek out “non-linearity” – to catch insights that others may miss.

A certain type of investing may look to understand linear relationships: if x moves then y moves. The investor will then bet on that relationship continuing to be the case. “We are actually doing the opposite,” says Kumar. “We’re interested in when those relationships change, because that’s when the most interesting things can happen.”

Alongside news coverage, Goose Hollow has trained the machines to summarise bank research – and, more recently, Substack newsletters and podcasts – into a simple matrix, all in one place, to help the firm’s portfolio managers prioritise what to spend time reading.

Every day’s a school day

Goose Hollow also employs generative AI as a research co-pilot, to help fill in areas of expertise that the team may lack.

Here Kumar references a tenet of Zen Buddhism – shoshin – meaning to approach topics with a beginner’s mindset. One may be reluctant to ask obvious, basic questions when speaking to an expert. But with LLMs, you are talking to a machine and can be more comfortable embracing that beginner mindset to tackle unfamiliar topics.

Goose Hollow has built an LLM-based tool to generate simple graphs of the causal relationships within segments of the economy. This comes from the idea of system dynamics, which aims to understand the feedback loops impacting a certain sector. The firm instructs the LLM to look at a topic from a systems perspective to find all the different factors positively and negatively interacting within the given ‘system’.

In one example, the LLM was able to identify a risk that a heat wave in East Africa might hit copper prices. Using a flow diagram created by the LLM, Kumar could see that the heat wave would affect countries that depended on hydroelectricity, including Zambia, which is Africa’s biggest copper producer. In extreme weather conditions, Zambian mines were at greater risk of shutdowns due to lack of power.
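The kind of output involved can be pictured as a signed causal graph. The sketch below is illustrative only – the nodes, edges and signs are assumptions for this article, not Goose Hollow’s actual model output – but it shows how multiplying edge signs along a chain gives the net direction of an indirect effect, such as the heat wave’s impact on copper prices.

```python
# Illustrative signed causal graph for the heat-wave-to-copper chain.
# Node names and edge signs are assumptions, not the firm's model output.
EDGES = {
    ("heat_wave", "hydro_output"): -1,      # heat/drought cuts hydro generation
    ("hydro_output", "mine_power"): +1,     # Zambian mines rely on hydro power
    ("mine_power", "copper_supply"): +1,    # power shortages force mine shutdowns
    ("copper_supply", "copper_price"): -1,  # tighter supply lifts prices
}

def path_effect(path):
    """Multiply edge signs along a path: +1 means reinforcing, -1 means inverse."""
    sign = 1
    for a, b in zip(path, path[1:]):
        sign *= EDGES[(a, b)]
    return sign

chain = ["heat_wave", "hydro_output", "mine_power", "copper_supply", "copper_price"]
print(path_effect(chain))  # net sign of the heat wave's effect on copper prices
```

Even without precise coefficients, the sign of the chain is enough to flag that an East African heat wave is bullish for copper.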

Though GPT-4-type models may not be able to calculate precise coefficients of impact – analysts would need econometric models for that – the tech can still come up with usable approximations. “It doesn’t have to be exact,” says Kumar. The model is able to rapidly piece together this sort of relational thinking, which investors such as Kumar find helpful.

Limewrite (https://bit.ly/46Kgl2i)
Russian cargo ship Lady R leaves South Africa’s Simon’s Town Naval Base on December 9, 2022, later sparking a diplomatic incident

Goose Hollow used this same mechanism to understand the outlook for stocks of solar power companies.

The LLM identified a series of negative factors, such as solar intermittency and high upfront costs, and positive ones, such as technological advances and investment flowing into the sector.

Kumar’s team fact-checked the different factors. But the head start in identifying the relevant variables was useful. “It’s a good starting point, when we don’t have a clue about what factors or variables might be important here,” says Kumar.

From there, the team had a firmer grasp of what were otherwise counterintuitive market moves.

“If you look at solar stocks, they’ve collapsed since last year – why is that?” Kumar says. One school of thought held that Biden’s Inflation Reduction Act, and the money it directed towards solar production, would be good for solar stocks. Production costs have also fallen and technology has improved in China. Yet actual solar installations have not risen accordingly.

Kumar concluded that the fall in solar panel prices is not as big a factor, especially when set against the rising cost of installing and financing them.

Goose Hollow is working on obtaining more accurate estimates of these sorts of coefficients, so that the firm can make better predictions.

Other use cases include a form of scenario analysis, in which the LLM provides a range of alternate outcomes, and asking the bot to suggest hypothetical trades based on particular news stories.

Goose Hollow is also looking at developing agents, a more sophisticated application of LLMs. The agents would have specialised functions, as Kumar explains: “You could have a fundamental analyst, technical analyst, geopolitical analyst and have it analyse a topic or an investment from all those angles.”
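In its simplest form, that role-based pattern is just a different system prompt prepended to the same question for each ‘analyst’. The role names and prompt wording in this sketch are hypothetical, not Goose Hollow’s implementation.

```python
# Hypothetical role prompts for a multi-agent analysis; each role sees the
# same question framed from its own specialist perspective.
ROLES = {
    "fundamental": "You are a fundamental analyst. Assess valuation and earnings.",
    "technical": "You are a technical analyst. Assess price action and trend.",
    "geopolitical": "You are a geopolitical analyst. Assess political risk.",
}

def build_prompts(question):
    """Return one prompt per role, combining the role's framing with the question."""
    return {role: f"{system}\n\nQuestion: {question}"
            for role, system in ROLES.items()}

prompts = build_prompts("Should we add exposure to Zambian copper miners?")
```

Each prompt would then be sent to the model separately, and the answers combined into a single multi-angle view.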

Weight training

For tasks including news summarisation and analysis, Goose Hollow uses off-the-shelf LLMs such as Gemini or GPT-4. But anything relating to investment decisions or using proprietary data is run on smaller models within the firm’s own network.

Getting the model to identify which news events are important requires training. “It’s a tough problem because what is ‘important’ changes over time,” says Kumar.

To help fine-tune the model, the firm assigns weights or ratings to news articles. “We give it a story and say this story is an importance of seven and this story is an importance of nine and this story is an importance of one,” Kumar says. “It’s not exact, but it’s looking for correlations to previous occurrences of it.”
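One common way to feed such ratings into a fine-tuning run is to format the labelled stories as chat-style training records. The headlines, scores and record layout below are illustrative assumptions, not the firm’s data or pipeline.

```python
import json

# Hypothetical labelled examples: each headline is paired with an
# analyst-assigned importance score from 1 to 10.
LABELLED = [
    ("Russian-flagged vessel docks at South African naval base", 9),
    ("Central bank keeps rates unchanged, as expected", 3),
    ("Local football club announces new sponsor", 1),
]

def to_finetune_records(examples):
    """Format (headline, score) pairs as chat-style fine-tuning records."""
    return [
        {
            "messages": [
                {"role": "user",
                 "content": f"Rate the market importance (1-10): {text}"},
                {"role": "assistant", "content": str(score)},
            ]
        }
        for text, score in examples
    ]

records = to_finetune_records(LABELLED)
print(json.dumps(records[0]))
```

Trained on enough such pairs, the model learns to score fresh headlines by analogy with previously rated ones – which is what Kumar means by “looking for correlations to previous occurrences”.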

The firm iterates its prompts to gain more sophisticated or accurate outputs. For example, they will ask the model not just to consider first-order effects, but second-order ones.

One helpful technique, Kumar says, is to adjust the speed at which the model works. By default, LLMs generate responses rapidly. But the results can be patchy.

Kumar has found that by forcing the model to slow down, it produces different or better output. “Whenever you say ‘take a deep breath, take things step by step, take a little break, take a pause,’ it seems to come up with a better response,” says Kumar.
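This prompt iteration can be as simple as prepending a deliberate instruction and asking for knock-on effects explicitly. The two prompt versions below are illustrative wording only, not the firm’s actual prompts.

```python
# Illustrative prompt iteration: v1 asks for direct effects only; v2 adds a
# "slow down" instruction and asks for second-order effects as well.
STORY = "Drought cuts hydroelectric output in Zambia."

PROMPT_V1 = f"List the market effects of this news:\n{STORY}"

PROMPT_V2 = (
    "Take a deep breath and work through this step by step.\n"
    "First list the direct (first-order) market effects of this news, "
    "then, for each, the knock-on (second-order) effects.\n"
    f"{STORY}"
)
```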

Editing by Alex Krohn
