Communicating AI Responsibly: How to navigate the hype, ethical considerations and human impact
‘Artificial Intelligence’ is a topic saturated with bold proclamations and sweeping predictions.
Hype is par for the course with any disruptive technology, and when that technology has the potential to reshape everything from economies to identities, a degree of hyperbole feels inevitable.
Worryingly, these sweeping statements and exaggerations are having a damaging impact on trust in the UK. Research recently commissioned by TFD reveals that 53% of UK consumers trust the media less as a result of widespread GenAI use. More surprising still: over a third of the younger generation (18-24-year-olds) actually trust the media more since the advent of mainstream AI use.
With over half the population becoming more distrustful of media output, and younger audiences far more willing to believe what they hear from AI outputs - despite widespread documentation of hallucinations and misinformation - we, as communicators, face the urgent task of navigating increasingly polarised audiences and rethinking how credibility is earned and maintained.
At the same time, we have to contend with newfound scepticism and accusations of AI use within our own content, copy and ideas.
To explore these challenges and discuss how to communicate AI responsibly, we brought together three communication experts: Aled Lloyd Owen, Professor of Enterprise and Chief of Staff at Responsible AI UK; Jasper Hamill, freelance tech journalist and founder of the publication Machine; and Susi O'Neill, author and Executive Director at EVA: Responsible AI + Digital Trust.
Throughout the discussion, they tackled some of the most urgent issues around AI ethics, communication, and trust - offering practical insights for businesses and communicators looking to foster open and honest conversations about AI.
Want to watch the full recording? See here.

The Hype vs. Reality Gap
The conversation started with an overview of how companies and communicators can responsibly frame the future of AI without falling into the trap of hype.
Susi O'Neill warned against the industry-wide misconception that "everybody wants AI and they want to see that you're doing it," driven by investor expectations or a desire to appear innovative. She argued that this represents a fundamental misunderstanding of marketing, pointing out that public perception of AI is far from universally positive, with audiences moving further away from the "typical Silicon Valley tech bro" attitudes that have dominated some industries. This underscores the need for tailored communication strategies that cater to diverse audience perspectives.
Interestingly, veteran journalist Jasper Hamill offered a more nuanced perspective. He argued that the potential of AI, such as the "creation of a new silicon super species," is inherently sensational and deserves to be communicated in engaging language. However, Hamill also stressed the critical need to guard against the inaccuracies inherent in Large Language Model outputs.
All panellists agreed that spreading fear for the sake of "clicks" creates a harmful environment for all parties, causing genuine anxiety and backlash. This highlights the ethical responsibility of communicators to temper excitement with factual accuracy and avoid gratuitous fear-mongering.
Building Trust and Transparency
The panellists collectively emphasised the need for transparency regarding AI usage. Aled Lloyd Owen underscored the foundational role of trust in communication strategies. He argued it is vital for organisations to bridge the gap between the technical complexities of AI and the public's understanding and expectations. This involves clear communication that resonates with different stakeholders, assuring them of responsible AI practices.
Susi O'Neill echoed this sentiment, highlighting that AI scepticism remains disproportionately high amongst certain demographic groups, making transparent communication even more critical when addressing diverse audiences. This is why it's important for organisations to have a public-facing AI policy statement that addresses usage as well as the environmental and social impact of AI models.
What's more, a strong commitment to transparency and openness fosters confidence amongst stakeholders and lays a foundation for long-term trust.
Why Maintaining a ‘Human-Centric’ Approach is Vital
A recurring theme throughout the webinar was the importance of viewing AI as a tool to enhance human capabilities rather than a replacement for human expertise, creativity, and connection.
Jasper Hamill, while acknowledging the potential for AI to automate certain tasks, stressed the importance of human oversight and creativity. He shared his experience of using AI as a "collaborator" in his work, but emphasised the need to maintain "human collaboration." Hamill’s perspective highlights the potential for AI to assist with research, organisation, and even generating initial drafts, but ultimately, the human professional is needed to inject voice, tone, and critical thinking.
Susi pointed out that replacing human roles altogether often leads to a decline in quality. Instead, "it's about how humans adapt and work with AI tools, not about the output from the AI tools itself."
Aled Lloyd Owen echoed this sentiment, emphasising the need for a "maturing of the overall perception of what AI can do." He argued that the real efficiency gains come from an experienced comms professional who is able to use these tools "to ask them the right question, to set them going in the right direction, to produce things more quickly or in a more rounded way."
In summary, what are the key takeaways for business leaders and communication professionals?
- Develop clear AI policies: Create public-facing statements explaining how your organisation uses AI, similar to privacy policies.
- Communicate benefits, not features: When announcing AI initiatives, focus on how they improve outcomes for users, with real-life examples, rather than simply stating "we use AI," or making sweeping, exaggerated statements about its capabilities or potential.
- Establish AI guardrails: Create frameworks to ensure everyone understands potential harms, risks and ethical biases when using AI tools, and implement measures to address them as effectively as possible.
- Maintain human oversight: Keep humans involved in reviewing AI outputs to prevent bias and ensure quality, and label any public-facing AI-generated content, particularly images.
- Invest in training: Ensure employees receive adequate AI training, focusing on practical skills like effective prompting.
But what did AI make of the discussion? We asked ChatGPT, Claude, and Gemini for their takes, and the disparities were notable. Gemini and Claude both drew out "hype versus reality" and "human-AI collaboration" as key points, while ChatGPT opted for a broader, more generic overview. In our upcoming AI Collective blog, we'll share our learnings from engaging with these three platforms and the lessons comms professionals can apply when leveraging AI. Stay tuned!
For a free 30-minute consultation on how TFD can advise on, or deliver, a responsible communications programme, please do get in touch!