Ofcom

Open letter to UK online service providers regarding Generative AI and chatbots


Today we've published an open letter to online service providers operating in the UK about how the UK’s Online Safety Act will apply to Generative AI and chatbots.

Here is the letter in full:

To online service providers operating in the United Kingdom,

In recent weeks and months, we have seen multiple incidents of online harm that have involved the use of Generative AI, by which we mean AI models that can create text, images, audio and videos in response to a user prompt. These include the tragic death of an American teenager who had developed a relationship with a chatbot based on a Game of Thrones character. Last week, we were also alerted to a particularly concerning case in which users of a Generative AI chatbot platform had created chatbots to act as ‘virtual clones’ of real people and deceased children, including Molly Russell and Brianna Ghey.

These distressing incidents have raised questions about how the UK’s Online Safety Act will apply to Generative AI.

We would like to take this opportunity to remind you of what is regulated under the Act and how it applies to Generative AI chatbot tools and platforms. This includes:

  • Sites or apps that allow their users to interact with each other by sharing images, videos, messages, comments or data with other users of the platform are ‘user-to-user services’, in the language of the Act.[1] Where a site or app includes a Generative AI chatbot that enables users to share text, images or videos generated by the chatbot with other users, it will be a user-to-user service. This includes, for example, services with ‘group chat’ functionality that enables multiple users to interact with a chatbot at the same time – whether this chatbot functionality is the main feature of the service, or is just part of a bigger service such as a social media platform.
  • Where a site or app allows users to upload or create their own Generative AI chatbots – ‘user chatbots’ – which are also made available to other users, it is also a user-to-user service. This includes services that provide tools for users to create chatbots that mimic the personas of real and fictional people, which can be submitted to a chatbot library for others to interact with. Any text, images or videos created by these ‘user chatbots’ are ‘user-generated content’ and are regulated by the Act.[2]
    • Indeed, any AI-generated text, audio, images or videos that are shared by users on a user-to-user service are user-generated content and are regulated in exactly the same way as human-generated content. For example, deepfake fraud material is regulated no differently to human-generated fraud material. It does not matter whether that content was created on the platform where it is shared or has been uploaded by a user from elsewhere.

The Act also regulates Generative AI tools and content in other ways, including:

  • Generative AI tools that enable the search of more than one website and/or database are ‘search services’ within the meaning of the Act.[3] This includes tools that modify, augment or facilitate the delivery of search results on an existing search engine, or which provide ‘live’ internet results to users on a standalone platform. For example, in response to a user query about health information, a standalone Generative AI tool might serve up live results drawn from health advice websites and patient chat forums – this would make it a search service regulated by the Act.
  • Sites and apps that include Generative AI tools that can generate pornographic material are also regulated under the Act.[4] These services are required to use highly effective age assurance to ensure children cannot normally access pornographic material.

Where the above scenarios apply to your service, we would strongly encourage you to prepare now to comply with the relevant duties. For providers of user-to-user services and search services, this means, among other requirements, undertaking risk assessments to understand the risk of users encountering harmful content; implementing proportionate measures to mitigate and manage those risks; and enabling users to easily report illegal posts and material that is harmful to children. The first duties will begin to take effect from December this year, when we publish our final Illegal Harms Risk Assessment Guidance and Codes of Practice.

Many of the measures in our draft Codes of Practice will help user-to-user and search services to meet these duties and protect their users from risks posed by Generative AI. These include:

  • having a named person accountable for compliance with the Online Safety Act;
  • having a content moderation function that allows for the swift takedown of illegal posts where identified and for children to be protected from material that is harmful to them;
  • having a content moderation function that is adequately resourced and well trained;
  • using highly effective age assurance to prevent children from encountering the most harmful types of content where this is allowed on the platform; and
  • having easy-to-access and easy-to-use reporting and complaints processes.

The duties set out in the Act are mandatory. If companies fail to meet them, Ofcom is prepared to take enforcement action, which may include issuing fines. The first major milestone for sites and apps falling under Part 3 of the Act is to complete their Illegal Harms Risk Assessment, which they will need to do by mid-March 2025. For pornography sites falling under Part 5, we expect the government to commence the Part of the Act relevant to these providers around the time of Ofcom publishing its Part 5 Guidance in January 2025, at which point their age assurance duties will be enforceable. Ofcom’s website contains more information about the duties of regulated platforms and relevant deadlines.

Our supervision team is on hand to support you with any queries that relate to Generative AI and what you can do to remain compliant with the Act. You can submit enquiries to us online here, and check if the Online Safety Act applies to you by using our ‘regulation checker’ tool here. While the duties are not yet live, there is no reason why you cannot take immediate steps today to lay the groundwork for compliance and to protect your users from any risks they may already face.

Lindsey Fussell, Interim Group Director for Online Safety

NOTES:

  1. These types of platform are in scope of duties in Part 3 of the Act.
  2. User-generated content includes images, videos, messages or comments, as well as other forms of data, that are generated, uploaded or shared by the user and can be encountered by other users.
  3. Search services have search engine functionalities that enable a person to search more than one website or database – or potentially all of them. These types of platforms are also in scope of duties in Part 3 of the Act.
  4. Specifically, they would fall in scope of the duties set out in Part 5 of the Act. These duties apply to service providers that display or publish pornography in image, video or audio form on their platforms.


Channel website: https://www.ofcom.org.uk/

Original article link: https://www.ofcom.org.uk/online-safety/illegal-and-harmful-content/open-letter-to-uk-online-service-providers-regarding-generative-ai-and-chatbots/
