AI chatbots and online regulation – what you need to know
19 Dec 2025 09:44 AM
AI chatbots have been in the news a lot recently. They are new and increasingly prevalent tools that people use for work, study, research, entertainment or simply for conversation.
But as use of this new chatbot technology increases, so does its potential to cause harm.
There have been reports about people using them to imitate real people – including people who have died – and to create content aimed at upsetting and harassing others.
Shockingly, there have been reports of cases where chatbots have encouraged people to harm themselves or even take their own life.
Here we explain how AI chatbots are covered by the UK’s Online Safety Act, and what providers must do to protect people who use them. A more detailed explanation is included in our previously published open letter to online services.
The types of services the Online Safety Act applies to
Under the UK’s online safety rules, providers of certain online services must assess and reduce the risk of harm to their users – especially children.
The rules apply to:
- User-to-user services – those that allow people to share images, videos, messages, comments or data with other people. Social media sites and apps are an example of this.
- Search services – those that enable users to search more than one website or database, including but not limited to traditional search engines.
- Services that publish pornographic images, video or audio online, including via chatbots – they must use highly effective age assurance to prevent children from accessing that content.
You can read more about the measures that online services in scope of the Online Safety Act must take to protect all users from illegal content and to ensure children are protected from harmful content.
Where chatbots fall under the Online Safety Act
A chatbot that meets the Online Safety Act’s definition of one of the service types above – or is part of one of those services – is covered by the Act’s rules.
Other AI generated content
Any AI-generated content shared by users on a user-to-user service is classed as user-generated content and is regulated in the same way as content generated by humans. For example, a social media post that includes harmful AI-generated imagery is regulated in the same way as similar content created by a human.
Ofcom’s role
As the UK’s regulator for online safety, our role is to protect people online by implementing and enforcing the rules Parliament passed in the Online Safety Act. We can take enforcement action – including issuing fines – if platforms fail to meet their duties.
Some chatbots, and the content they produce, are not covered by the Online Safety Act. For example, chatbots are not subject to regulation if they:
- Only allow people to interact with the chatbot itself, and with no other users;
- Do not search multiple websites or databases when giving responses to users; and
- Cannot generate pornographic content.
We can only take action on online harms covered by the Act, using the powers we have been granted. Any changes to these powers would be a matter for government and Parliament. We are supporting the UK Government as it considers possible changes.
Where can I find out more?
With technology constantly changing and new threats emerging, Ofcom keeps a close eye on how the use of AI is evolving. Our series of discussion papers explores the changing ways people are using AI, the online safety risks that may emerge and what can be done to tackle them. The discussion papers are available below: