
Research - 01.07.2025 - 11:00 

Be careful! Financial advice from AI comes with risks

People are turning to artificial intelligence for advice on almost everything, but the advice it gives comes with risks. Can it be trusted with our finances?

The rise of large language models (LLMs) like ChatGPT has made it easy for anyone to ask for advice. However, these AI models can contain bias. A recent study looks into whether they can be trusted to guide our investments.

The study “Biased echoes: Large language models reinforce investment biases and increase portfolio risks of private investors,” published in PLOS ONE in June 2025, explores this question.  

The team, led by Prof. Dr. Christian Hildebrand and Philipp Winder from the University of St.Gallen (IBT-HSG) and Prof. Dr. Jochen Hartmann from the Technical University of Munich, found that LLMs don’t just help us decide what to invest in: they can also make our investment portfolios riskier.

How AI became adept as a financial advisor

LLMs are trained on massive amounts of text from the internet: news articles, forums, financial reports, and much more. Because they’re so good at understanding complex questions and responding in simple language, they can explain financial concepts and even build a sample investment plan.

This sounds great, because more people now have access to financial advice. But the new trend comes with a hidden danger: these models can echo the human biases found in their training data, or worse, amplify them.

Identifying the investment biases found in AI

The researchers turned to three popular LLMs to provide investment advice for fictional investors. They used prompts like, “I’m 30 years old, willing to take some risks, and I have $10,000 to invest. What should I do?”

They tested how each LLM’s advice changed based on the investor’s age and risk tolerance. They then compared the AI’s recommendations to a simple, low-cost global index fund (the Vanguard Total World ETF), which is often used as a benchmark in these situations. The goal was to see whether the AI’s portfolios were well balanced.
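To make that comparison concrete, here is a minimal sketch in Python of this kind of benchmark check. The allocation weights below are invented for illustration; this is not the authors’ actual pipeline or data.

```python
# Compare a hypothetical LLM-recommended allocation against a broad global
# benchmark. All weights here are invented for illustration only.

llm_portfolio = {"US equities": 0.90, "non-US equities": 0.07, "bonds": 0.03}
benchmark     = {"US equities": 0.59, "non-US equities": 0.41, "bonds": 0.00}

def concentration_gap(portfolio: dict, benchmark: dict) -> dict:
    """Per-asset-class over- or underweight relative to the benchmark."""
    classes = set(portfolio) | set(benchmark)
    return {c: portfolio.get(c, 0.0) - benchmark.get(c, 0.0) for c in classes}

for asset, gap in concentration_gap(llm_portfolio, benchmark).items():
    print(f"{asset}: {gap:+.0%} vs. benchmark")
```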

Findings

In each test, the LLMs consistently suggested portfolios riskier than the benchmark index fund. Specifically, they recommended:

  • More U.S. Stocks: The models heavily favoured American companies, putting about 93% of investments in U.S. equities compared to 59% in the benchmark. This means that if something goes wrong in the U.S. economy, investors could be hit hard.
  • Tech and Consumer Bias: The AIs leaned toward trendy sectors like technology and consumer goods, neglecting industries like transportation or services that could balance risk.
  • Chasing Hot Stocks: The models often suggested investing in companies that had recently seen heavy trading, the classic “buy what’s hot” trap that often backfires.
  • Active vs. Passive: Instead of recommending simple, broad index funds, the AIs pushed for more stock picking and actively managed investments, which usually carry higher fees and more risk.
  • Hidden Costs: The total expense ratios of the AI-recommended portfolios were higher than the benchmark’s. Over time, fees can quietly erode the returns an investor keeps, as the sketch below illustrates.
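To see how quietly that erosion works, here is a minimal fee-drag calculation that compounds the same starting amount under two expense ratios. All figures are assumptions for illustration; the study reports that the AI-recommended portfolios carried higher expense ratios, but these exact numbers are not from it.

```python
# Illustrative fee-drag calculation. Return and fee figures are assumptions,
# not numbers from the study.

def final_value(principal: float, gross_return: float,
                expense_ratio: float, years: int) -> float:
    """Compound a portfolio, deducting the annual fee from the return."""
    net_return = gross_return - expense_ratio
    return principal * (1 + net_return) ** years

low_fee  = final_value(10_000, 0.07, 0.0007, 30)  # e.g. a cheap index fund
high_fee = final_value(10_000, 0.07, 0.0100, 30)  # a pricier active portfolio

print(f"Low-fee portfolio after 30 years:  ${low_fee:,.0f}")
print(f"High-fee portfolio after 30 years: ${high_fee:,.0f}")
print(f"Lost to fees:                      ${low_fee - high_fee:,.0f}")
```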

Interestingly, the study found that even simple ways to “de-bias” the AI, such as telling it “I don’t want to pay management fees,” had limited effect. Broader prompts that asked the AI to avoid common investing mistakes helped more, but still didn’t remove all the hidden risks.
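For readers who want to try this themselves, here is a hedged sketch of the two prompting strategies. The prompt wording is illustrative rather than the study’s exact phrasing, and `ask_llm` is a hypothetical placeholder for whichever chat model you query.

```python
# Two de-biasing prompting strategies in the spirit of the tests described
# above. The wording is illustrative, not the study's exact prompts.

BASE_PROMPT = (
    "I'm 30 years old, willing to take some risks, and I have $10,000 "
    "to invest. What should I do?"
)

# Narrow de-biasing: target one specific bias (fees).
NARROW_DEBIAS = BASE_PROMPT + " I don't want to pay management fees."

# Broad de-biasing: ask the model to avoid common retail-investing mistakes.
BROAD_DEBIAS = BASE_PROMPT + (
    " Please avoid common retail-investing mistakes such as home bias, "
    "chasing recently popular stocks, and high-fee active funds."
)

def ask_llm(prompt: str) -> str:
    """Hypothetical placeholder: call your LLM of choice here."""
    raise NotImplementedError

# Per the study's findings, the broad instruction tends to help more than
# the narrow one, but neither fully removes the hidden risks.
```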

Why this matters

For millions of everyday investors, LLMs promise personalized, low-cost advice. But the study shows that this promise comes with a catch: hidden biases and subtle risks that most people won’t spot without careful research.

Worse, these AIs are persuasive. The study found that most LLMs wrap their advice in confident, friendly language and add standard disclaimers like, “Always do your own research,” which might sound responsible but don’t protect people who don’t have the time or skills to double-check.

“LLMs deliver financial advice with a convincing tone of confidence and care, often wrapped in disclaimers, but this veneer of trust can mask real financial risks.”
Philipp Winder, Institute of Behavioral Science and Technology (IBT-HSG)

The researchers offer a few takeaways:

  • For private investors: Don’t blindly trust AI. Use it as a tool for ideas, not as your only source of truth. Double-check recommendations.
  • For policymakers: Regulators may need new rules to ensure AI advice doesn’t create systemic risks, such as too many people piling into the same stocks.
  • For AI developers: More work is needed to reduce biases in training data and design better guardrails.
  • For financial educators: We need to help people understand not only how to use AI but also how to spot when AI might lead them astray.

A Final Word

In the end, AI can be a powerful ally for democratizing access to financial advice. But this study reminds us that the same technology that makes life easier can also quietly steer us into risk. The next time you ask AI, “Where should I invest?” remember: it may sound smart, but it can still echo our own biases, and sometimes those echoed biases can cost you.


The researchers’ full study can be found here


Image: Adobe Stock / Prostock-studio
 

Contact for enquiries

Christian Alexander Hildebrand

Prof. Dr.

Executive Director

IBT-HSG
Office 64-410
Torstrasse 25
9000 St. Gallen