Research - 01.07.2025 - 11:00
The rise of large language models (LLMs) like ChatGPT has made it easy for anyone to ask for advice. However, these AI models can contain bias. A recent study looks into whether they can be trusted to guide our investments.
The study “Biased echoes: Large language models reinforce investment biases and increase portfolio risks of private investors,” published in PLOS ONE in June 2025, explores this question.
Led by Prof. Dr. Christian Hildebrand and Philipp Winder from the University of St.Gallen (IBT-HSG) and Prof. Dr. Jochen Hartmann from the Technical University of Munich, the research team discovered that LLMs don't just help us decide what to invest in; they can also make our investment portfolios riskier.
LLMs are trained on massive amounts of text from the internet: news articles, forums, financial reports, and much more. Because they're so good at understanding complex questions and responding in simple language, they can explain financial concepts and even build a sample investment plan.
This sounds great because more people now have access to financial advice. But with this new trend comes a hidden danger: these models can echo the same human biases found in their training data, or even worse, they can amplify them.
The researchers turned to three popular LLMs to provide investment advice for fictional investors. They used prompts like, “I’m 30 years old, willing to take some risks, and I have $10,000 to invest. What should I do?”
They tested how each LLM's advice changed based on the investor's age and risk tolerance. They then compared the AI's recommendations to a simple, low-cost global index fund (the Vanguard Total World ETF), which is often used as a benchmark in these situations. The goal was to see if the AI's portfolios were well-balanced.
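The core of such a comparison is measuring how much riskier an AI-suggested mix is than the single benchmark fund. A standard way to do this is portfolio volatility, the square root of w'·Cov·w. The sketch below illustrates the idea with made-up volatility and correlation figures; these numbers and the two-asset "AI portfolio" are purely illustrative assumptions, not figures from the study.

```python
import math

def portfolio_volatility(weights, vols, corr):
    """Annualized portfolio volatility: sqrt(w' * Cov * w),
    where Cov[i][j] = corr[i][j] * vols[i] * vols[j]."""
    n = len(weights)
    variance = 0.0
    for i in range(n):
        for j in range(n):
            variance += weights[i] * weights[j] * corr[i][j] * vols[i] * vols[j]
    return math.sqrt(variance)

# Benchmark: 100% in one broad global index fund (illustrative 15% vol).
bench_vol = portfolio_volatility([1.0], [0.15], [[1.0]])

# Hypothetical AI-suggested mix: 60% single-country stocks (22% vol) and
# 40% sector fund (25% vol), correlation 0.8 -- all numbers invented.
ai_vol = portfolio_volatility(
    [0.6, 0.4], [0.22, 0.25], [[1.0, 0.8], [0.8, 1.0]]
)

print(f"benchmark vol: {bench_vol:.1%}, AI portfolio vol: {ai_vol:.1%}")
```

Even with a fairly high correlation between the two assets, the concentrated mix ends up more volatile than the diversified benchmark, which is the kind of gap the researchers looked for.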
In each test, the LLMs consistently suggested portfolios with higher risk than the benchmark index fund.
Interestingly, the study found that even simple ways to “de-bias” the AI — like telling it “I don’t want to pay management fees” — had limited results. Broader prompts that asked the AI to avoid common investing mistakes helped more, but still didn’t remove all the hidden risks.
For millions of everyday investors, LLMs promise personalized, low-cost advice, but the study shows that this promise comes with a catch: hidden biases and subtle risks that most people won't spot without careful research.
Worse, these AIs are persuasive. The study found that most LLMs wrap their advice in confident, friendly language and add standard disclaimers like, “Always do your own research,” which might sound responsible but don’t protect people who don’t have the time or skills to double-check.
The researchers offer a few takeaways.
In the end, AI can be a powerful ally for democratizing access to financial advice. But this study reminds us that the same technology that makes life easier can also quietly steer us into risks. The next time you ask AI, "Where should I invest?" remember: it may sound smart, but it can still echo our own biases, and sometimes these bias-echoes can cost you.
The full study can be found here.
Image: Adobe Stock / Prostock-studio