Are we leaving small businesses behind in the cybersecurity arms race?
Cybersecurity affects us all – nations, businesses, and individuals. AI threatens to change the nature of the digital threats we face as a society, from the scams flooding our inboxes and threatening to drain our bank accounts, to the new attack vectors it hands nation-state hackers looking to disrupt our lives.
For businesses, AI represents both an opportunity and a threat. Hackers will use AI tools to create new malware and launch more sophisticated phishing attacks to gain entry to business systems, but the same tools mean IT departments can be better armed to defend themselves. For third and public sector organisations with fewer resources at their disposal, however, the threat posed by AI-driven attacks is particularly acute.
We spoke to Stephanie Itimi – Director of Information Protection and Compliance at Age UK, Chair of the NCSC's Charity Trust Group, and Founder & Board Chair of Seidea CIC, a career development platform that connects Black, Asian and Minority Ethnic talent with inclusive employers in cybersecurity – about the things that keep her awake at night.
During her career, Stephanie played a pivotal role with the BBC World Service team in launching the first public health information service on WhatsApp, which combated Ebola disinformation in West Africa and reached over 16,000 users, and she contributed to the World Economic Forum's Cybersecurity Outlook 2025 report. That background has given her extensive insight into the global cybersecurity landscape, and she shared her thoughts on how the next decade may play out, and how the good guys can ensure their tools stay better than the bad guys'.
In the last few years, a disturbing new scam has emerged that threatens both businesses and individuals: voice cloning, in which a clip of a person's voice as short as three seconds is lifted from social media and cloned to convince loved ones or colleagues that they are talking to someone they know.
The CEO of WPP, the world’s biggest advertising agency, had his voice cloned to solicit money and private company information from colleagues last year. Meanwhile, in a horrific case in the US, a mother believed she was listening to her teenage daughter begging for her life in a kidnapping attempt, and nearly handed over $50,000 before she realised it was an AI scam.
For Itimi, voice cloning is what most worries her when it comes to the future of cybersecurity.
“In the next decade, we’ll see an increasing blurring of the lines between what is real and what isn’t. While deepfakes are already a significant concern, I believe the most frightening development will be voice cloning. It’s the hardest to detect and, as a result, poses an even greater risk to personal security and trust.”
Itimi believes it will usher in a new relationship between individuals and social media because everything we put on it, from videos to pictures, could be ammunition for criminals in the age of AI.
“I’ve used ChatGPT, and it’s been able to pull up virtually every piece of information about me available online. Everything from my social media profiles to public records. Now, imagine how easily that wealth of personal data could be exploited in a social engineering attack, where criminals use it to manipulate or deceive individuals into divulging even more sensitive information.”
It means consumers really need to start protecting their data, whether by paying a service to remove their details from data brokers or simply getting more savvy about the steps they can take themselves.
“For example, on ChatGPT, you can turn off your memory to stop your data from being entered into the system.”
She also advised that if a loved one calls asking for money or claiming to be in distress, you should hang up and call them back on a number you know is theirs. Other security experts go even further, suggesting we all need a safe word for phone calls with family, friends, or even our bosses.
Attacks every 14 seconds
Worldwide cybercrime costs are estimated to hit $10.5 trillion annually by 2025, and that isn’t a figure that is likely to go down in the next decade.
Itimi is concerned about how small businesses and charities, which often lack the resources of larger organisations, will afford the increasingly sophisticated defences they need.
She thinks cybersecurity will inevitably become an even greater arms race between criminals and defenders – one that hinges on who wields the most powerful AI tools.
For criminals and scammers, AI will mean that they can scale up their attacks. A GenAI program can write a thousand convincing, personalised phishing emails in seconds, for instance. It is estimated that there is an attack on a business every 14 seconds.
But it isn’t entirely bleak. AI tools can also be utilised by businesses to test the robustness of their systems and improve their security, said Itimi.
“In the past, it would have cost a lot of money to replicate cybersecurity attacks. AI now allows us to do twice as much for a fraction of the price, and you can be a bit more experimental with the scenarios that you are testing out, too,” she told TFD.
Human error is often a company’s biggest weakness, as many attacks begin when an employee unwittingly clicks on a phishing email. As AI makes these scams even more convincing, businesses will need to devote a portion of their security budget to training their staff, teaching them how to recognise and avoid increasingly sophisticated threats over the next decade.
She also expects a crackdown on shadow AI – employees using AI tools that haven't been sanctioned by their company.
She has three key recommendations for businesses that may not have huge budgets but want to future-proof themselves.
“Start with the Cyber Essentials, but if you are a small business and you can’t even afford that, at least you should be focusing on the NIST (National Institute of Standards and Technology) frameworks and have a roadmap for the next five or 10 years. And when it comes to AI, you want to start having those conversations within your organisation and have an AI policy,” she said.
Cyber Essentials is a UK scheme, but its common-sense advice applies globally.
Need for DEI
Much of Itimi’s work is focused on diversity, which remains a problem within cybersecurity and the wider tech industry.
The UK government estimates that 30% of cyber firms faced a skills shortage in 2024, while the World Economic Forum states that worldwide, there is a shortage of four million cybersecurity workers.
Itimi finds Meta's and Amazon's decisions to pull back on diversity programmes extremely concerning as the industry seeks more cyber experts in the coming decade.
“I worry that other companies will follow suit and we might see a regression in all aspects, whether it be gender, neurodiversity, or ethnicity.”
She warned that DEI should be viewed as a strategic priority rather than just a marketing initiative.
“What does that signal to those in communities that don’t normally have access to those industries? Because even in cybersecurity, we live in a global world, the attacks are coming from global sources, and we need a global workforce to be able to understand the minds of hackers who might come from anywhere in the world.”
“It is a necessity in the world that we live in,” she said.
Ransomware regulation?
Another key discussion point on the global cybersecurity agenda is the surging threat of ransomware, and whether government intervention can stem the tidal wave of attacks.
According to security firm Veeam, the average organisation will experience around two ransomware attacks each year. Such attacks – where hackers gain entry to a company’s system, encrypt its data, and demand a ransom to unlock it – often also come with a threat to release sensitive data on the dark web.
Tragically, a huge 84% of businesses opt to pay the ransom, according to Cybereason, exposing them to further risk: 78% of those who paid were hit by a second attack, while, according to Veeam’s annual threat report, nearly a third of those who paid were unable to recover their data.
The UK government is currently considering a ban on paying ransoms, but it is not a policy Itimi thinks can work.
“It may deter malicious actors, but what happens to the businesses or individuals whose data is leaked or sold? If you block one thing, malicious actors will always find another way.”
While ransomware will remain a common way for hackers to attack businesses, much of the future of cyber may be fought out at the national level.
Itimi predicts a rise in infrastructure attacks from nation-state hackers looking to cause societal disruption.
“Targeting critical infrastructure can have far-reaching consequences, disrupting entire economies. Without energy, how do you power essential services, turn on the lights, or even heat homes? The impact goes beyond inconvenience; it can bring daily life to a standstill.”
Attacks such as those on the Colonial Pipeline, or the breach of the SCADA systems at the Bowman Avenue Dam in New York, could become regular occurrences in the next decade, she thinks.
As cybercrime evolves beyond financial gain into a form of cyber warfare, she thinks it is essential for businesses and governments to collaborate in strengthening their defences and preparing for future threats.
Ready to elevate your cybersecurity PR? Explore our services: https://www.wearetfd.com/services/cybersecurity-pr