Device Insider

Would ‘trading bots’ powered by AI change the investing industry?

Search for "AI investing" online and you will be flooded with promises to let artificial intelligence manage your money.

I recently spent thirty minutes researching the capabilities claimed for AI-powered "trading bots" that promise to invest on my behalf.

Whether these bots make their decisions with human assistance or entirely on their own, reputable financial firms caution against relying on them, despite their many claims of impressive returns.

Amid the recent hype surrounding artificial intelligence, a US survey conducted in 2023 found that almost one-third of investors would be comfortable letting a trading bot make all of their financial decisions.

Investors should be cautious about AI, says John Allan, head of innovation and operations at the UK's Investment Association. He stresses the seriousness of investing, which can affect people's long-term goals, and advises waiting until AI has had time to prove its effectiveness before adopting it. For the time being, he believes, human investment professionals will remain very important.

It makes sense that Mr. Allan, representing the investment industry, would be wary of expensive, highly qualified human investment managers being replaced by AI-powered trading bots. That said, AI trading is still relatively new and comes with genuine unknowns and difficulties.

First of all, AI is not clairvoyant: it cannot predict the future any more accurately than humans can. Looking back over the last 25 years, even highly developed AI systems could not have foreseen shocks such as 9/11, the 2007–2008 credit crisis, and the COVID-19 pandemic.

Elise Gourier, an associate professor of finance at the ESSEC Business School in Paris, studies what happens when AI goes wrong. She points to Amazon's 2018 hiring tool as a prime example.

“Amazon was one of the first companies to get caught out,” she claims. “They created this artificial intelligence technology to hire people.

The idea was that the thousands of resumes Amazon received could be processed entirely automatically: in essence, the AI program screened the resumes and recommended which candidates to hire.

The issue was that the AI tool had been trained on the resumes of the existing workforce, which consisted mostly of men. As a result, the algorithm was essentially screening out all of the women.
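The mechanism behind that failure is easy to reproduce. The following toy sketch (entirely hypothetical, not Amazon's actual system) "trains" a naive resume scorer on historical hires that were mostly men; any keyword correlated with the under-represented group ends up with a low learned weight:

```python
# Toy illustration of training-data bias (hypothetical example, not
# Amazon's real system): a scorer trained on mostly-male historical
# hires learns to penalize keywords that appeared only on the few
# female applicants' resumes.

# Historical training data: (resume keywords, hired?).
training = [
    ({"chess club", "football"}, True),
    ({"football", "golf"}, True),
    ({"chess club"}, True),
    ({"women's chess club"}, False),  # few women in the data, not hired
]

def keyword_weights(data):
    """Naive 'training': each keyword's weight is the hire rate
    among resumes that contained it."""
    counts = {}
    for keywords, hired in data:
        for kw in keywords:
            total, hires = counts.get(kw, (0, 0))
            counts[kw] = (total + 1, hires + int(hired))
    return {kw: hires / total for kw, (total, hires) in counts.items()}

def score(resume, weights):
    """Average learned weight over the resume's known keywords."""
    known = [weights[kw] for kw in resume if kw in weights]
    return sum(known) / len(known) if known else 0.0

weights = keyword_weights(training)
print(score({"chess club"}, weights))          # 1.0 — looks like past hires
print(score({"women's chess club"}, weights))  # 0.0 — penalized by the bias
```

The scorer never sees gender directly; it simply reproduces whatever correlations sat in the historical data, which is the essence of the problem Gourier describes.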

A "hallucination" is when generative AI simply makes information up, explains Prof. Sandra Wachter, a senior research fellow in AI at Oxford University.

"Generative AI is prone to biases and mistakes: it can provide false information or entirely invent facts, and these flaws can be very difficult to detect," she says.

Prof. Wachter also warns that automated AI systems could be compromised by data leaks or "model inversion attacks". In the latter, hackers pose a series of targeted queries to the AI in the hope that it will reveal its underlying data and code.
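The core idea of extracting a model's internals through targeted queries can be shown in a highly simplified sketch (a toy linear model, not a real attack on a production system): with unlimited black-box access, carefully chosen inputs leak the parameters one by one.

```python
# Toy sketch of the idea behind a "model inversion"-style attack
# (hypothetical and highly simplified): an attacker with unlimited
# query access probes a black-box model with targeted inputs until
# its hidden parameters are fully recovered.

# "Secret" linear model the attacker can query but not inspect.
SECRET_WEIGHTS = [0.7, -1.2, 3.0]

def query(x):
    """Black-box access: returns the model's score for input x."""
    return sum(w * xi for w, xi in zip(SECRET_WEIGHTS, x))

# Targeted queries: probing with one-hot inputs isolates each weight.
recovered = [
    query([1 if j == i else 0 for j in range(len(SECRET_WEIGHTS))])
    for i in range(len(SECRET_WEIGHTS))
]
print(recovered)  # [0.7, -1.2, 3.0] — the "secret" model is exposed
```

Real attacks on large models are far more involved, but the principle is the same: each answer the system gives narrows down what it must contain.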

Meanwhile, rather than being a brilliant investment advice engine, AI may come to resemble the stock pickers who used to appear in Sunday newspapers. Every Monday morning, the handful of shares they had tipped would, amazingly, rise in value right away.

Naturally, this had nothing to do with the millions of readers all rushing to buy the tipped shares.

Why, then, do so many investors seem ready to let AI make decisions for them despite all these risks? According to Stuart Duff, a business psychologist at Pearn Kandola, some people simply trust computers more than they trust other people.

It is most likely a misplaced belief that machines make objective, reasoned, and measured decisions while human investors are prey to their own failings, he says. Investors may assume that AI is perfect, never makes mistakes, and never tries to cover up losses.

However, an AI investment tool may simply reflect all the bad judgments and faulty reasoning of its creators. And if an unprecedented catastrophe on the scale of the financial crisis or COVID strikes again, it may lack the first-hand experience and quick reactions of a human manager. Very few people could develop AI algorithms capable of handling such extreme situations.
