The psychology of perception: how AI is distorting our risk assessment in finance and investing

Investors' positive emotions when trading lead them to underestimate the risks of financial products. Photo: PiggyBank / Unsplash.com
The conversation about artificial intelligence in financial markets usually centers on technology and regulation. But another, "quiet" aspect - the psychological one - may be just as important. A new paper in Nature shows that AI elements in digital financial services reduce the perceived risk of loss and increase the desire to keep using them and "playing" with money. Why does this happen, and what should investors watch for in their own psychology?
Emotions as a hidden financial parameter
The classical financial industry is built on the concept of a rational investor: a person who evaluates returns, risks and planning horizons and makes an informed decision.
But behavioral finance research as far back as the 1970s showed that people in fact deviate systematically from the "rational" model: they treat gains and losses asymmetrically, tend to overweight small probabilities, and exhibit loss aversion.
Now that AI has become a mass-market phenomenon, it is also being built into financial services: AI consultants and advisors, smart recommendations, conversational interfaces and other solutions.
A paper in Nature based on two psychological studies, published on April 1 of this year, shows that all of these AI features evoke a whole range of positive emotions in users, from curiosity and satisfaction with an "intelligent" decision to a sense of control and even a feeling of superiority.
For the research, the authors conducted an online experiment and a follow-up survey built around an expectancy confirmation model. In the first study, participants were offered fictional scenarios of interacting with digital financial services in which AI features activated positive emotions in different ways; the researchers then measured the perceived value of the service, the subjective assessment of the risk of loss, and the intention to continue using the service. In the second study, users with real experience of such services were surveyed to see how AI-activated positive emotions influenced their satisfaction with the service and their intention to keep using it.
In the paper, researchers Xi Chen, Cheng Chen and Lin Huang of Yunnan University point to a rather simple but disturbing mechanism: the more "friendly" and "smart" an interface seems to us, the less danger and risk we feel, even where, objectively, the risk has not gone anywhere.
The relationship turned out to be direct: the stronger the positive emotion associated with the AI feature, the higher the subjective usefulness of the application and the lower the subjective sense of possible investment losses. In other words, the same returns and the same risk are perceived more favorably simply because the context in which the information is presented has become emotionally more comfortable.
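To make the direction of these effects concrete, they can be written as a minimal linear path sketch. The signs follow the paper's description, but the linear form, the variable names and the coefficients are illustrative assumptions here, not the authors' actual model specification:

\[
\begin{aligned}
\text{PerceivedValue} &= \beta_1 \cdot \text{PositiveEmotion} + \varepsilon_1, \qquad \beta_1 > 0,\\
\text{PerceivedRisk} &= -\beta_2 \cdot \text{PositiveEmotion} + \varepsilon_2, \qquad \beta_2 > 0,\\
\text{ContinuanceIntention} &= \beta_3 \cdot \text{PerceivedValue} - \beta_4 \cdot \text{PerceivedRisk} + \varepsilon_3, \qquad \beta_3, \beta_4 > 0.
\end{aligned}
\]

In this sketch the product itself does not change at all: the emotional channel shifts only the perceived quantities, and those perceptions in turn drive the intention to continue.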
How it works in practice
It can look entirely innocent and ordinary: the app offers a smart selection of investment ideas, visualizes possible scenarios, gives hints in the spirit of "based on your goals and risk profile", adds a bit of gamification and thereby has a soothing effect. The person feels that the system is "on their side", and the sense of investment risk dissolves in the algorithm's apparent competence and care.
Under the influence of this mechanism, an investment is no longer perceived as a decision made alone, but as a joint one - made together with a "smart partner". Meanwhile, the product's profile itself - asset volatility, leverage, sector concentration - may remain unchanged.
The researchers emphasize that it is emotions that play the central role both in the decision to continue using a service and in the underestimation of the risks of financial products.
However, this is not about direct manipulation or concealment of information. When interface design and algorithm behavior systematically evoke pleasant emotions, the effect stems primarily from the human propensity to feel those emotions, not from a "trick" built into the AI.
For the retail investor, this can mean a shift toward riskier decisions with an unchanged, or even insufficient, understanding of the risks.
The expectancy confirmation effect: why "everything was fine" is dangerous
Another important aspect of the described effect is the confirmation of expectations. The scheme is as follows: a user comes to a digital financial service with certain tasks and expectations - for example, that AI features will make everything at least easier and faster. If the first experience turns out to be positive - the interface was convenient, the recommendation "did not fail", no money was lost - then the emotions activated by the AI reinforce the sense that expectations have been matched by reality.
As a result, a cognitive chain is built: "AI helps → my expectations are met → the service is reliable → I can continue → perhaps I can slightly increase the risk". The authors point out that this is especially noticeable in situations where actual risk rarely materializes, such as during a long bull market or with conservative products at the start of an interaction.
If users have several successful, loss-free trading experiences in the markets, this reinforces their trust not only in specific products but also in the AI guidance itself. That, in turn, further lowers perceived risk, and decisions can drift toward more aggressive strategies.
Another aspect of this phenomenon deserves attention: if the moment when real risk materializes is postponed, the investor's subjective sense of reliability and safety can grow excessively strong. A market correction, and the revision of the investment strategy it forces, will then feel all the more painful psychologically: "Not only did the market fail me, but so did my smart service".
What it means for the investor
The paper and the research behind it suggest the following conclusion: the more emotionally appealing AI-based financial services become, the more responsibility for managing risk perception falls on their creators and regulators. If AI features lower perceived risk, traditional approaches to limiting service-provider liability, such as prospectuses and fine-print warnings, no longer provide sufficient investor protection.
At the same time, it makes sense for investors themselves to separate the emotional features - gamification, a "friendly" chatbot - from the features involved in actual investment decisions, so as not to transfer a positive emotional background onto risk assessment. Such features can help you feel more confident and make more informed decisions, but, given human nature, they can also render risk invisible.
This article was AI-translated and verified by a human editor.
