- Jonty Bloom
- business reporter
If you search for “AI investing” online, you’ll see endless offers to let artificial intelligence manage your money.
I recently spent 30 minutes researching what so-called AI “trading bots” can apparently do for my investments.
Many firms suggested it could give me a winning advantage. However, as all reputable financial companies warn, your capital may be at risk.
Put more simply, you can lose money whether it’s a human or a computer making stock market decisions for you.
However, the capabilities of AI have been hyped in recent years, and one survey conducted in the US in 2023 found that almost one in three investors would be happy to let a trading bot make all their decisions for them.
John Allan says investors should be more cautious about their use of AI. He is head of innovation and operations at the Investment Association, the UK’s industry body for investment managers.
“Investing is very serious and affects people and their long-term life goals,” he says. “Therefore, it may not be wise to be swayed by the latest trends.
“At the very least, I think we need to wait until AI proves itself over the very long term before we can judge its effectiveness. And in the meantime, I think human investment professionals will still have an important role to play.”
One might expect Mr Allan to say this, given that AI-powered trading bots could put some highly trained but expensive human investment managers out of work. Yet such AI trading is undoubtedly new and comes with its own problems and uncertainties.
First, AI is not a crystal ball; it cannot see into the future any better than a human can. Looking back over the past 25 years, there have been unexpected events that tripped up the stock market, including 9/11, the 2007-2008 credit crisis and the coronavirus pandemic.
Second, an AI system is only as good as the initial data and software that human computer programmers use to create it. A little history helps to explain this issue.
In fact, investment banks have been using basic AI, or “weak AI”, to guide their market picks since the early 1980s. That basic AI would study financial data, learn from it, and make autonomous decisions that would – hopefully – become increasingly accurate. Yet those weak AI systems did not predict 9/11, or even the credit crisis.
Fast forward to today, and when we talk about AI, we often refer to something called “generative AI.” This is a much more powerful AI that can create something new and learn from it.
When applied to investing, generative AI can absorb large amounts of data and make independent decisions. But it can also study that data to find better ways of working, and develop its own computer code.
But if this AI was originally fed bad data by human programmers, its decisions may get increasingly worse the more of its own code it writes.
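To picture how that compounding might look, here is a minimal, hypothetical Python sketch – a toy linear fit standing in for a real learning system, with every number invented for illustration. A model is trained on price data containing an initial bias, then each new generation learns from the previous generation’s output, and its error against reality never shrinks:

```python
import numpy as np

# Hypothetical toy example (not any real trading system): a model is
# fitted on biased price data, then repeatedly retrained on its own
# output. The initial bias is never corrected, so each generation can
# drift further from the true signal.

rng = np.random.default_rng(0)

true_prices = np.linspace(100, 110, 50)            # the "real" market signal
bias = 2.0                                          # bad data fed in at the start
training_data = true_prices + bias + rng.normal(0, 0.5, 50)

for generation in range(5):
    # "Train": a simple linear fit stands in for a learning step
    coeffs = np.polyfit(np.arange(50), training_data, 1)
    predictions = np.polyval(coeffs, np.arange(50))

    error = np.mean(np.abs(predictions - true_prices))
    print(f"generation {generation}: mean error vs reality = {error:.2f}")

    # The next generation learns from this generation's output, plus a
    # little extra drift -- the bias is baked in and keeps growing.
    training_data = predictions + rng.normal(0.3, 0.5, 50)
```

Nothing here resembles a production trading bot; the point is only that a bias baked in at the start survives, and can grow through, every retraining step.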
Elise Gourier, associate professor of finance at ESSEC Business School in Paris, is an expert in how AI goes wrong. She cites Amazon’s recruitment efforts in 2018 as a prime example.
“Amazon was one of the first companies to get caught out,” she says. “What happened is that they developed this AI tool to recruit talent.
“They were receiving thousands of resumes, so they thought they would automate the whole process. And basically, an AI tool would read the resumes for them and tell them who to hire.
“The problem is that the AI tool was trained on existing employees, and those employees were primarily men, so basically what the algorithm was doing was filtering out all the women.”
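Gourier’s example can be reduced to a toy model. The sketch below is a hypothetical reconstruction of the failure mode she describes, not Amazon’s actual system: a classifier is trained on invented historical hiring decisions that skewed heavily male, and it duly learns the gender proxy as if it were a qualification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: column 0 is a gender proxy (e.g. inferred from
# CV wording), column 1 is the genuinely relevant skill measure.
rng = np.random.default_rng(1)
n = 2000

is_male = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# Invented historical hiring decisions: skill mattered a bit, but being
# male mattered a lot -- the bias baked into the training data.
hired = (0.5 * skill + 2.0 * is_male + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([is_male, skill]), hired)

print("learned weights [is_male, skill]:", model.coef_[0])

# Two equally skilled candidates, differing only in the proxy feature,
# get very different hire probabilities:
candidates = np.array([[1, 1.0],    # male, skill 1.0
                       [0, 1.0]])   # female, skill 1.0
print("hire probability:", model.predict_proba(candidates)[:, 1])
```

The model is never told to discriminate; it simply reproduces the pattern in the data it was given, which is exactly the point.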
According to Professor Sandra Wachter, a senior research fellow in AI at the University of Oxford, generative AI can also simply get things wrong, and produce false information known as “hallucinations”.
“Generative AI is prone to bias and inaccuracy, and can spout false information or fabricate facts outright. Without rigorous oversight, these flaws and hallucinations are difficult to detect.”
Prof Wachter also warns that automated AI systems could be at risk of data leaks and so-called “model inversion attacks”. The latter, simply put, is when a hacker asks an AI a series of specific questions in the hope of revealing its underlying coding and data.
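The mechanics of that second attack can be illustrated with a deliberately simplified, hypothetical sketch. The attacker below never sees the training data, only the model’s answers, yet by probing systematically they recover a “secret” value the model absorbed during training:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# --- The defender's side: a model trained on "secret" data -------------
secret_threshold = 3.7                        # private info baked into the data
x = rng.uniform(0, 10, (500, 1))
y = (x[:, 0] > secret_threshold).astype(int)
model = LogisticRegression().fit(x, y)        # attacker gets query access only

# --- The attacker's side: a series of specific questions ---------------
probes = np.linspace(0, 10, 10001).reshape(-1, 1)
confidence = model.predict_proba(probes)[:, 1]

# The probe where the model is most uncertain sits on its decision
# boundary -- which here leaks the secret threshold from the data.
recovered = probes[np.argmin(np.abs(confidence - 0.5)), 0]
print(f"recovered threshold ~ {recovered:.2f} (secret was {secret_threshold})")
```

Real attacks against real systems are far more elaborate, but the principle is the same: enough well-chosen questions can expose what a model learned from data its owners never intended to share.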
It’s also possible that AI will become less like a genius investment advice engine and more like the stock-picking columns you used to find in a Sunday newspaper. The tipsters would always recommend buying a mining stock first thing on Monday morning, and miraculously the shares were always among the first to jump in value that day.
Of course, this had nothing to do with tens of thousands of readers rushing to buy the stock in question.
So, despite all these risks, why do so many investors seem so keen to let AI make decisions for them? One industry expert suggests it is because people often trust computers more than their fellow humans.
“This almost certainly reflects a subconscious belief that machines are objective, logical and thoughtful decision-makers, whereas human investors are fallible,” he says. “They may believe that AI never takes a day off, and never deliberately tries to game the system or hide losses.
“However, AI investment tools may simply reflect all the thinking errors and bad decisions of their developers. And when an unprecedented event occurs in future – another financial crisis, say, or something like the coronavirus pandemic – the advantages of human experience and quick reactions may be lost, because few humans are capable of building AI algorithms in advance to cope with such large-scale events.”
Additional reporting by Will Smale.