Advent of GenAI is driving mistrust in the media, TFD research reveals

  • Over half of UK consumers say they trust the media less because of AI
  • However, the inverse is true for Gen Z, with 35% trusting the media more

The widespread use of generative AI platforms is driving distrust in the media, according to research announced today by deep tech communications agency TFD (Think Feel Do).

The study, which polled 2,000 UK consumers in March 2025 and was conducted by OnePoll, revealed that over half (53%) of UK consumers believe their trust in the media (defined as print and digital news, social media and broadcast) has declined as a direct result of GenAI tools such as ChatGPT, Gemini and Claude. The sharpest decline in trust (59%) was reported in the over-65 age group.

More surprising is the reported increase in trust in the media among certain demographics:

  • 35% of 18-to-24-year-olds and 25-to-34-year-olds reported that their trust in the media had increased either slightly or significantly after the mainstream adoption of generative AI.
  • 15% of respondents overall also reported increased trust.

This perhaps reflects a willingness among ‘digital native’ generations to engage with AI not just as a tool, but as a news source or aggregator.

The research follows a recent TFD webinar on ethical AI storytelling, in which AI experts and journalists discussed how to communicate responsibly about AI and how the technology is being used.

AI survey results

Several AI experts commented on the results:

  • Stephanie Forrest, CEO at TFD: “While generative AI has driven benefits and efficiencies, it's also fuelled misinformation and disinformation, presenting an enormous risk to democracy and society as a whole. Given how rife misinformation has become, the role of journalists has become more critical but also more challenging. It is clear that the media must communicate carefully to build trust. This involves being transparent about how they use AI and ensuring accurate reporting, as well as scrutinising claims made about AI technology.”

  • Aled Lloyd Owen, Professor of Enterprise and Chief of Staff for Responsible AI UK: “The relationship between the general public and AI will boil down to two things: regulation, and perceived trust in the technology and the providers of this technology. The onus of building this trust falls on the media, the providers of generative AI tools, and the government, who are responsible for regulating this technology. Communicating how these tools are regulated in a way which encourages trust involves tailoring the message for specific stakeholders. In the case of media publishers, this means ensuring that AI tools are being used ethically, safely, and without diluting or influencing the output of a particular publisher or institution.”

  • Jasper Hamill, Freelance tech journalist and founder of tech publication Machine: “It comes as no surprise that the biggest technological leap forward in a generation has created a healthy scepticism around trust in the media. But this mistrust is something which should be encouraged. People should never blindly trust the views or reporting of news organisations, all of which are capable of making mistakes, despite the best of intentions. The reporting around AI should be cognizant of this mistrust and report on the technology in a way which helps non-technical people understand the huge change that’s on the horizon.” 

  • Susi O’Neill, Founder of EVA and expert on building trust in tech: “If the media want to work to combat this mistrust, then it is up to them to ensure transparency in their own use of AI tools, and work to change the existing narratives around AI. Organisations such as the New York Times have taken this approach, publicly disclosing exactly where AI is used, and crucially, not used. This approach will help naturally sceptical consumers of media to differentiate between the publishers using AI to supplement and support the work of journalists, and those using generative AI tools to replace journalists.” 
