UNICEF – ‘Deepfake abuse is abuse’
5 Feb 2026 12:41 PM
“UNICEF is increasingly alarmed by reports of a rapid rise in the volume of AI-generated sexualised images circulating, including cases where photographs of children have been manipulated and sexualised.
“Deepfakes – images, videos, or audio generated or manipulated with Artificial Intelligence (AI) designed to look real – are increasingly being used to produce sexualised content involving children, including through ‘nudification’, where AI tools are used to strip or alter clothing in photos to create fabricated nude or sexualised images.
“New evidence confirms the scale of this fast-growing threat: In a UNICEF, ECPAT and INTERPOL study* across 11 countries, at least 1.2 million children disclosed having had their images manipulated into sexually explicit deepfakes in the past year. In some countries, this represents 1 in 25 children – the equivalent of one child in a typical classroom.
“Children themselves are deeply aware of this risk. In some of the study countries, up to two thirds of children said they worry that AI could be used to create fake sexual images or videos. Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention, and protection measures.
“We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse material (CSAM). Deepfake abuse is abuse, and there is nothing fake about the harm it causes.
“When a child’s image or identity is used, that child is directly victimised. Even without an identifiable victim, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children who need help.
“UNICEF strongly welcomes the efforts of those AI developers that are implementing safety-by-design approaches and robust guardrails to prevent misuse of their systems. However, the landscape remains uneven, and too many AI models are not being developed with adequate safeguards. The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly.
“UNICEF urgently calls for the following actions to confront the escalating threat of AI-generated child sexual abuse material:
- All governments expand definitions of child sexual abuse material (CSAM) to include AI-generated content, and criminalise its creation, procurement, possession and distribution.
- AI developers implement safety-by-design approaches and robust guardrails to prevent misuse of AI models.
- Digital companies prevent the circulation of AI-generated child sexual abuse material – not merely remove it after the abuse has occurred – and strengthen content moderation with investment in detection technologies, so such material can be removed immediately, not days after a report by a victim or their representative.
“The harm from deepfake abuse is real and urgent. Children cannot wait for the law to catch up.”
For more information, please contact:
UNICEF UK Media team at media@unicef.org.uk or 0208 375 6030.
About UNICEF
UNICEF works in some of the world’s toughest places, to reach the world’s most disadvantaged children. Across more than 190 countries and territories, we work for every child, everywhere, to build a better world for everyone.
The UK Committee for UNICEF (UNICEF UK) raises funds for UNICEF’s emergency and development work for children. We also promote and protect children’s rights in the UK and internationally. We are a UK charity, entirely funded by supporters.
United Kingdom Committee for UNICEF (UNICEF UK), Registered Charity No. 1072612 (England & Wales), SC043677 (Scotland).
For more information visit unicef.org.uk.
Follow UNICEF UK on Instagram, LinkedIn, Facebook and YouTube.