Pension savers are increasingly turning to artificial intelligence (AI) tools rather than traditional financial guidance websites when they begin thinking about retirement, according to analysis by PensionBee.

The personal pension provider cited data from search marketing platform Semrush, which suggested that traffic to the government-backed MoneyHelper website had fallen by 10% over the past six months.
Over the same period, there has been a steady rise in Google’s “AI Overviews” generated from MoneyHelper’s content, indicating that savers are getting answers directly from search results rather than clicking through to source material.
OpenAI has itself identified retirement planning as a growing use case. When asked how people approaching retirement are using ChatGPT, the tool responded: “Retired people and those nearing retirement are quietly but meaningfully using ChatGPT and similar AI tools as a thinking partner, not a replacement for regulated advice.”
It cited sense-checking retirement decisions, modelling informal “what-if” scenarios, translating pension jargon, raising tax awareness, co-ordinating retirement income and seeking reassurance as key uses.
The trend indicates a widening split between generic guidance and more tailored services. While informational sites may be vulnerable to so-called “zero-click” searches, traffic to Pension Wise, which offers free, bookable guidance appointments for over-50s, has not seen the same decline, PensionBee said.
Luis Mejia, head of data and AI at PensionBee, said: “As many of us have experienced, AI is a generally good substitute for some financial guidance, but advisory services are better protected.
“In the face of continued improvements to AI technology, the retirement industry faces a serious challenge of remaining relevant and trusted while savers increasingly rely on AI for more complex guidance and even personalised advice.”
The Financial Conduct Authority (FCA) is examining AI’s impact on financial services under the Mills Review, contributions for which are invited until 24 February. The regulator has also established an AI consortium with the Bank of England.
Meanwhile, the Treasury Committee has also published a report on AI in financial services, signalling that policymakers are watching closely as savers increasingly experiment with digital decision-making tools.
Consultancy warns of poorly sourced AI responses
PensionBee’s findings follow a warning from communications consultancy Quietroom that many AI responses about specific pension schemes are misleading and often sourced from the websites of unrelated schemes.
Quietroom cited previous studies finding that around 70% of people using Google accept its AI-generated overviews at face value without checking sources – even though Google provides links to those sources alongside the summary.
The consultancy tested different AI tools, including ChatGPT, OpenAI’s Operator, and Google, and found significant issues with generated responses. Often, the tools struggled to interact with scheme websites and could not access the correct information. However, they would still generate an answer, and might not inform the user that they had been unable to read the correct scheme’s site.
In addition, some tools generated incorrect responses because they gathered information from other pension schemes’ websites. Because rules and benefits vary from scheme to scheme, answers drawn from the wrong scheme’s site were inaccurate.
Simon Grover, director at Quietroom, said: “Members are no longer reading what their scheme has written – they’re reading what AI tools serve up, which may or may not be accurate. And they’re asking AI to give them key points and what decision they should make.”
Quietroom highlighted that large language models (LLMs), the technology that powers most AI tools, often miss information that is hidden in the middle of documents. It also explained that AI tools “cannot distinguish between different cohorts of members”, which can also lead to inaccuracies.
Grover argued that the issue was often the result of overly complex documents and text, and contended that “clear, consistent, well-structured” content will make it easier for AI tools to give accurate summaries.