Flattery, hallucinations and risk: how to invest in the market with AI

Up to half of individual investors already turn to artificial intelligence for advice. AI models can help build a portfolio and analyze market conditions, but their recommendations should be treated with caution. Real-world experiments show that AI is capable of pushing risky decisions and breaking investors' cardinal rules.
Artificial Consultant
According to a survey by consulting firm EY of 18,000 people in 23 countries, about 49% of consumers have turned to AI for help with investment and savings decisions in the past six months. Some 37% of respondents described it as "very" or "extremely" useful for personalized advice or for automating financial decisions.
Generation Z is the most active (68%) in using chatbots, followed by Millennials at 65%.
In Asia-Pacific, AI is being "adopted more aggressively" than in Europe, a survey of 13,000 retail investors from 13 countries by Fidelity International found: 30% of investors in Asia reported using tools such as ChatGPT, compared to 21% in Europe.
In comments to the Financial Times, financial experts, while recognizing the usefulness of the new "advisor," also point out the limitations associated with it.
The potential of AI is clear, but the challenge lies in how to use it. People often ask the wrong questions, or questions that are too narrow, and get incomplete or unreliable answers as a result. AI can be a powerful supporting tool, but it is most effective when combined with other trusted sources of information and a more integrated approach to financial decision-making.
Andrew Lo, a professor of finance at the Massachusetts Institute of Technology who studies the impact of AI on investment performance, recommends treating an AI advisor the way he once treated one of his assistants: the man was extremely savvy but abused marijuana, so Lo took everything he said with a degree of skepticism.
Holly Mackay, founder and CEO of consumer finance website Boring Money, believes the use of AI is set to grow in the near future, but there is also an increased demand to verify the information it provides. AI "will improve rapidly", but for now it is more of a "helpful guide", so consumers "need to check facts and go to other trusted sources", she believes.
Wonderful advice
Gunjan Banerji, a financial reporter for The Wall Street Journal and host of The WSJ Money Interview, came to similar conclusions. The question is not whether a chatbot will pick a financial instrument that earns you a good profit or beats the market, but how it approaches investment advice. That is the takeaway from Banerji's experiment, on which professional financiers later commented.
The test, which Banerji had planned to run for about a month, ended up stretching into six months as she asked ChatGPT to act as a financial advisor at key moments that could move the market, from a U.S. government shutdown to the war with Iran.
Initially, the reporter gave ChatGPT this task:

You are a financial advisor tasked with managing my long-term investment portfolio for one month (September 29 to October 29). My investment profile:
- Age: 30-35 years old
- Primary objective: growth
- Time horizon: long-term
- Risk tolerance (1 = very low ... 5 = very high): 3
- Investment amount: $1 million
- Account type: taxable
ChatGPT concluded that the investor "focuses on portfolio growth with some risk control," while also addressing tax efficiency through exchange-traded funds, municipal bonds, and opportunities to reduce taxes by realizing investment losses.
It suggested investing $500,000 in several U.S. equity exchange-traded funds with varying holdings, $200,000 each in developed- and emerging-market equity funds and in U.S. government bonds, and $50,000 each in alternative assets (a real estate fund) and cash.
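As a sanity check, the proposed buckets do add up to the stated $1 million. A minimal sketch (the dollar figures come from the article; the bucket labels are my own shorthand):

```python
# Allocation ChatGPT proposed, per the article; labels are illustrative.
allocation = {
    "US equity ETFs": 500_000,
    "Developed & emerging market equity funds": 200_000,
    "US government bonds": 200_000,
    "Real estate fund (alternatives)": 50_000,
    "Cash": 50_000,
}

total = sum(allocation.values())
assert total == 1_000_000  # matches the stated investment amount

# Show each bucket's share of the portfolio.
for bucket, amount in allocation.items():
    print(f"{bucket}: ${amount:,} ({amount / total:.0%})")
```

The split works out to 50% U.S. equities, 20% international equities, 20% bonds, and 5% each in real estate and cash.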
Allan Roth, a financial adviser at Wealth Logic, noted that a more diversified U.S. equity fund would have been better than the one ChatGPT suggested, and that the municipal bond allocation might be too small to qualify for the tax break. Overall, though, Roth gave the AI's advice a satisfactory grade.
The problems started next.
Last fall, Banerji asked ChatGPT what to do ahead of the pending shutdown, and this spring, during the war in the Middle East. In the first case, the AI suggested switching to shorter-term bonds, reducing exposure to stocks most sensitive to economic conditions, and "carefully using options or hedging instruments if there is an opportunity to protect yourself from a downturn." In the second, it recommended trimming international equity holdings and adding what it loosely described as "real asset/inflation hedges."
The AI correctly identified the risks, commented Rubin Miller, chief investment officer at Peltoma Capital Partners, but he found its recommendations odd. "It assumes that a human ... knows how to price options and understands when they're cheap and when they're expensive," Miller notes, adding that even most professional advisers can't do that.
In addition, the AI suggested adjusting specific investments whenever market conditions changed. That is traditionally considered a mistake for a long-term investor: short-term market fluctuations should not drive asset allocation. For example, many international stock markets, along with the U.S. market, fell in the first weeks of the war in Iran, but then recovered and are now setting new records.
"The most surprising thing about the world of investing is that even if you're judging the news correctly, you can still get specific trades wrong," Miller says.
Similarly, ChatGPT clearly laid out the risks of any trade war when Banerji asked it about Trump's trade war last fall. It suggested adjusting the portfolio by adding bonds or, again, options, and produced a list of companies, including defense stocks such as Lockheed Martin, that it thought might outperform.
Since mid-October, a basket of those stocks has risen about 5.5%, while the S&P 500 index has gained 8%.
Thus, the AI once again suggested adapting to the market, which advisers strongly counsel against. In February, the U.S. Supreme Court struck down the global tariffs Trump had imposed in April 2025. The president then immediately imposed 10% tariffs under a different law, but the Federal Court of International Trade in New York ruled those illegal as well.
Flattery and hallucinations
One of Banerji's conclusions will be familiar to anyone who has tried to get an AI model to answer honestly rather than simply flatter its interlocutor. "Sometimes it felt like ChatGPT was answering exactly what I wanted to hear, and following its advice would be risky if I were investing real money," she writes.
In particular, she asked it what it thought of leveraged exchange-traded funds. It warned that they were very dangerous instruments, but when she insisted, it offered to pick a few and went on to describe how they could be traded around a labor-market report. "I've written about it enough and asked traders about it to realize: I'm probably going to lose money on this," Banerji commented.
Professional investors have also begun using AI actively, but, given similar doubts, they rely on it more for quickly gathering and organizing information than for making actual investment and trading decisions.
A March survey by Edelman Smithfield found that 79% of 300 professional investors always or often use AI to analyze companies they intend to invest in, and 77% use it to manage risk, the FT writes.
"No AI system is capable of placing trade orders or directly making investment decisions on its own - it's just a support tool," says Brian Barbetta, co-head of technology at management firm Wellington Management.
Among hedge funds managing a combined $788 billion in assets, 95% of respondents use AI, with OpenAI's ChatGPT and Microsoft's Copilot the most popular, a survey by the Alternative Investment Management Association last year showed.
But managers cited two major risks of generative AI: "hallucinations," mentioned by 64% of respondents, and data security and privacy (83%), the FT notes.
Chatbots do occasionally "hallucinate," that is, invent nonexistent facts and pass them off as real, points out fact-checker Ilya Ber in a piece on whether AI models can replace live experts like himself. Thanks to developers' efforts, the share of such hallucinations in some models has dropped noticeably, but eliminating them entirely has proved impossible, as the models' own creators acknowledge, Ber notes.
You can ask a model to analyze documents or data and offer an opinion, but making an investment decision is best left to an experienced professional, Odi Lahav, chief operating officer at consultancy Bfinance, told the FT.
Mark Fitzpatrick, CEO of the UK's largest wealth management company, St James's Place, which works with 5,000 financial advisors, has a similar view: "I don't think AI will replace [live] advisers because clients... buy trust and peace of mind."
This article was AI-translated and verified by a human editor
