THE UK will introduce laws to ban AI tools used to generate sexual abuse images of children, becoming the first country to do so, the government announced on Saturday.
Home Secretary Yvette Cooper said it would be illegal to create, possess, or distribute AI tools designed for such content, with offenders facing up to five years in prison.
It will also be a crime to possess AI-generated "paedophile manuals" that provide instructions on using AI for child abuse, carrying a penalty of up to three years in prison.
"This is a really disturbing phenomenon. Online child sexual abuse material is growing, but also the grooming of children and teenagers online. And what's now happening is that AI is putting this on steroids," Cooper told Sky News on Sunday.
She said AI tools were making it easier for perpetrators "to groom children, and it's also meaning that they are manipulating images of children and then using them to draw and to blackmail young people into further abuse.
"It's just the most vile of crimes," she added.
The new law will also target certain AI models used for child abuse, Cooper said, adding, "Other countries are not yet doing this, but I hope everyone else will follow."
The government said AI tools were being used to generate abuse images by "nudeifying" real-life photos of children or by "stitching the faces of other children onto existing images."
The law will also criminalise those who run websites that enable paedophiles to share child abuse content or provide advice on grooming children, with offenders facing up to ten years in prison.
Cooper told the BBC that a recent inquiry found around 500,000 children in the UK experience some form of abuse each year, with online abuse becoming a growing concern.
The measures will be part of the Crime and Policing Bill when it is introduced in parliament.
The Internet Watch Foundation (IWF) has reported a rise in AI-generated child abuse images.
In a 30-day period in 2024, IWF analysts identified 3,512 such images on a single dark web site.
The number of images in the most serious category of abuse also increased by 10 per cent in a year, it found.
(With inputs from AFP)
FBU chief raises concern over rise in racist online posts by union members
THE FIRE Brigades Union (FBU) and other trade unions are increasingly concerned about a rise in racist and bigoted online comments by their own members and officials, the FBU's new general secretary, Steve Wright, has told the Guardian.
Wright said internal inquiries have revealed dozens of cases involving members using racist slurs or stereotypes, often aimed at asylum seekers.
He said similar issues were reported in other unions, prompting a joint campaign to counter false narratives around immigration and race promoted by far-right groups online.
"People with far-right views are becoming more brazen in what they do on social media, and I've witnessed it with my own union around disciplinary cases and the rhetoric of some of our own members," Wright told the newspaper.
He added, “Some of our members and sometimes our reps have openly made comments which are racist and bigoted. In my time in the fire service, that has gone up.”
The FBU is planning to introduce new internal policies and wants the TUC to take action as well. A formal statement addressing far-right narratives will be launched at the union’s annual conference in Blackpool next month.
Wright cited the influence of social media and figures like Donald Trump and Nigel Farage as factors contributing to these incidents. "It feels like an itch that we've got to scratch," he said.
The FBU barred a former official last year for allegedly endorsing racist content on X, including posts from Britain First and Tommy Robinson.
Wright also warned that the union could strike if the government moves to cut frontline fire services.