When UK consumers turned to Which? for trustworthy advice on artificial intelligence, they got a chilling warning: the chatbots they're using to manage their money are giving them bad, sometimes illegal, guidance. On November 18, 2025, the consumer advocacy group released findings from a test of six major AI tools, including ChatGPT from OpenAI, Microsoft Copilot, and Google Gemini, revealing alarming inaccuracies in financial advice. The stakes are high: one in six UK adults now relies on these tools for money decisions. And many are being led astray, sometimes with serious consequences.
How AI Got It So Wrong
Which? didn't just survey people; it put the chatbots to the test. Forty financial questions, ranging from tax rules to insurance mandates, were posed to each system. The findings were disturbing. ChatGPT told users it was okay to exceed the £20,000 annual limit on Individual Savings Accounts (ISAs) set by His Majesty's Revenue and Customs (HMRC). That's not just wrong; it's a potential tax breach. Worse, it claimed travel insurance was mandatory for trips to EU countries. It's not. Not even close.
Meta AI scored the lowest overall, spouting outright fabrications. Microsoft Copilot wasn’t much better, also pushing users to ignore ISA caps. Only Perplexity AI came close to getting it right, thanks to its source-citing architecture. The rest? They guessed. They hallucinated. They lied — confidently.
The Psychological Trap
It’s not just the errors. It’s the confidence with which they’re delivered. Colette Mason, founder of London-based Clever Clogs AI, put it bluntly: "Financial advice is a context problem, not a maths problem." AI doesn’t know you’re paying off a mortgage. It doesn’t know you’re risk-averse or that your pension is already stretched thin. But it’ll tell you to invest in crypto anyway — because it’s trained to please, not to protect.
This over-confidence creates what Mason calls a "catastrophic psychological trap." People trust the tone, the clarity, the certainty. They don’t question the source. And that’s exactly what the experts fear.
No Safety Net
Here’s the kicker: if you follow bad AI advice and lose money, you have no recourse. The Financial Conduct Authority (FCA) made it clear: AI-generated financial guidance isn’t covered by the Financial Ombudsman Service or the Financial Services Compensation Scheme. No compensation. No appeal. No protection.
That's not how the system works. If a human financial adviser misleads you, there are rules. There's accountability. AI? It's a black box with a friendly interface. And right now, 17% of UK adults are using it for money guidance.
Who’s Getting It Right?
Not all AI is dangerous. The Daily Express highlighted Garfield AI — a regulated tool that operates under the same legal framework as a law firm. It helped recover £7,000 in debts for users. Why? Because it’s designed for specific, regulated tasks. It doesn’t pretend to be a generalist.
Contrast that with ChatGPT, which will happily invent a company’s financial history if it doesn’t know the answer. Gaby Diamant, CEO of investment platform BridgeWise, warned: "If you ask a question about a company that is not very well-known, this is where hallucination will come because the chat will try to please you." That’s not advice. That’s gambling with your savings.
What Experts Are Saying
Andrew Laughlin, tech expert at Which?, urged users to treat AI like a very enthusiastic intern: useful for brainstorming, dangerous when left alone with your finances. "Always define your question clearly," he said. "And check the sources. For complex issues — medical, legal, financial — seek professional advice. Always."
The European Securities and Markets Authority (ESMA) echoes this. In its 2025 guidelines, ESMA insists that any AI used for financial advice must have transparency, auditability, and — crucially — human oversight. None of the general-purpose tools tested by Which? meet that standard.
What’s Next?
There’s no sign that AI usage is slowing. In fact, The Guardian reported readers using chatbots to negotiate appliance prices and compare credit cards — and sometimes succeeding. But those are low-stakes wins. The real danger lies in pension decisions, mortgage applications, and tax filings.
Which? is calling for urgent regulation. Right now, anyone can deploy an AI tool and offer financial insights, no licence required. That's like letting anyone give medical advice because they read a Wikipedia page.
Meanwhile, consumers are left to navigate a minefield with no map. The tools are everywhere. The risks are real. And the safety nets? They don’t exist.
Frequently Asked Questions
Which AI chatbots gave the worst financial advice in the Which? test?
Meta AI scored the lowest, followed closely by ChatGPT. Both repeatedly gave incorrect advice on ISA contribution limits and falsely claimed travel insurance was mandatory in the EU. Microsoft Copilot also failed on ISA rules, while Perplexity AI was the only tool to consistently cite sources and avoid hallucinations.
Why is AI advice not protected by UK financial regulations?
The Financial Conduct Authority (FCA) states AI chatbots aren’t licensed financial advisers and therefore fall outside consumer protection schemes like the Financial Ombudsman Service. Unlike human advisers, they aren’t held accountable for errors, and users have no legal recourse if they lose money based on their guidance.
Can AI ever be trusted for financial decisions?
Only if it’s a regulated, purpose-built tool like Garfield AI, which operates under the same legal standards as a law firm. General-purpose tools like ChatGPT or Gemini are designed for broad interaction, not personalized financial planning. They lack context, oversight, and accountability — making them risky for anything beyond basic comparisons.
What should I do if I’ve already followed AI financial advice?
Stop using the chatbot for financial decisions immediately. Review any actions taken — such as ISA over-contributions or insurance purchases — and consult a certified financial adviser. HMRC and banks can often correct errors if caught early, but delays may lead to penalties or lost benefits. Document everything you were told by the AI.
How common is AI use for financial advice in the UK?
According to Which?'s November 2025 survey, 17% of UK adults rely on AI for financial guidance, with some estimates suggesting up to half are using it for basic money research. That number has doubled since 2023, fuelled by free access and the perception that AI is "always available" — despite the risks.
What’s the difference between Garfield AI and ChatGPT?
Garfield AI is a regulated financial service that operates under the same legal framework as a law firm, meaning it’s accountable for its advice and carries professional liability insurance. ChatGPT is a general-purpose AI with no legal responsibility, no oversight, and no requirement to disclose sources — making it unsuitable for any serious financial decision.