Guidelines on how to deal with AI-generated child sexual abuse material (CSAM) have been issued to 38,000 teachers and staff across the UK.
The guidelines are an attempt to help people working with children tackle the “highly disturbing” rise in AI-generated CSAM.
They have been issued by the National Crime Agency (NCA) and the Internet Watch Foundation (IWF).
The AI-generated content is illegal in the UK and is treated the same as any other sexual abuse imagery of children, even if the imagery isn’t photorealistic.
“The rise in AI-generated child sexual abuse imagery is highly disturbing and it is vital that every arm of society keeps up with the latest online threats,” said safeguarding minister Jess Phillips.
“AI-generated child sexual abuse is illegal and we know that sick predators’ activities online often lead to them carrying out the most horrific abuse in person.
“We will not allow technology to be weaponised against children and we will not hesitate to go further to protect our children online,” she said.
The guidelines suggest that if young people are using AI to create nude images from pictures of each other – known as nudifying – or creating AI-generated CSAM, they may not be aware that what they’re doing is illegal.
Nudifying is when a non-explicit picture of someone is edited to make them appear nude and is increasingly common in “sextortion” cases – when someone is blackmailed with explicit pictures.
“Where an under-18 is creating AI-CSAM, they may think it is ‘just a joke’ or ‘banter’ or do so with the intention of blackmailing or harming another child,” suggests the guidance.
“They may or may not recognise the illegality or the serious, lasting impact their actions can have on the victim.”
Last year, the NCA surveyed teachers and found that over a quarter weren’t aware AI-generated CSAM was illegal, and most weren’t sure their students were aware either.
More than half of the respondents said guidance was their most…