Supporting and harnessing AI innovation safely

6 Jun 2025 01:35 PM

Ofcom has set out how we are supporting the safe innovation and use of artificial intelligence across the sectors we regulate, and streamlining the way we work.

Natalie Black, Ofcom’s Group Director for Networks and Communications, said: “AI is transforming the sectors we regulate – from automated online moderation to smarter telecoms networks and more accessible broadcasting.

“Ofcom is enabling this to flourish safely by creating the right conditions for innovation, while staying ahead of the risks to protect consumers. Our approach is clear: support growth, safeguard the public, and lead by example in how we use AI ourselves.”

Smarter communications

The industries we regulate have technology and innovation at their heart. As technologies evolve, new opportunities emerge that have the potential to drive better outcomes for consumers and businesses. For example:

In general, our regulation is technology-neutral, which means regulated companies are free to deploy AI as they see fit, without needing our permission. This helps enable faster innovation and growth.

That said, while AI affords new opportunities and benefits for businesses and consumers, it is important for Ofcom to stay ahead of any associated risks and take action to mitigate them.

Supporting innovation

Encouraging and promoting economic growth is built into Ofcom’s duties, and we are working on a range of initiatives to support AI innovation to help achieve this. These include:

Mitigating risks

While both industry and consumers benefit from AI deployment, the risks created or exacerbated by AI primarily flow to the consumer.

These risks can cause serious harm to individuals, especially in our online lives. For example, two in five UK internet users aged 16+ say they have seen a deepfake – among those, one in seven say they have seen a sexual deepfake.[2]

Of those who say they have seen a sexual deepfake, 15% say it was of someone they know, 6% say it depicted themselves, and 17% thought it depicted someone under the age of 18.

To tackle deepfakes and a range of other serious online harms, we are implementing – and starting to enforce – the UK’s Online Safety Act. Our ‘safety by design’ rules mean platforms should take down illegal content created by AI, and assess the risks of any changes they make to their services. These rules will help create a safer life online for all UK users, especially children, while ensuring that tech firms have the flexibility and freedom to innovate.

How Ofcom is using AI

We are harnessing AI to reduce the burden on all organisations and individuals we regulate or engage with. We have more than 100 technology experts – including around 60 AI experts – in our data and technology teams, including many with direct experience of developing AI tools.

We are carrying out over a dozen trials of AI in our own work, aimed at increasing our productivity, improving our processes and generating efficiencies. These include using everyday third-party GenAI applications as well as creating AI-based applications in-house. For example:

Over the next year, we plan to accelerate the use of AI across our policy areas as appropriate, adopting a safety-first approach. In practice, this means continuing to trial AI tools and only rolling them out across the organisation once we are confident they are safe and secure.[3]

Notes to editors:

  1. Agentic AI refers to a type of artificial intelligence where systems can operate autonomously, making decisions and performing tasks without constant human intervention.
  2. Ofcom deepfakes research, July 2024.
  3. This report provides examples of the AI work that we plan to carry out over the next 12 months in each of our policy areas as well as work that cuts across our policy areas.