New law could help tackle AI-generated child abuse at source, says watchdog

Groups tackling AI-generated child sexual abuse material could be given more powers to protect children online under a proposed new law.

Organisations like the Internet Watch Foundation (IWF), as well as AI developers themselves, would be able to test the ability of AI models to create such content without breaking the law.

That would mean they could tackle the problem at the source, rather than having to wait for illegal content to appear before they deal with it, according to Kerry Smith, chief executive of the IWF.

The IWF deals with child abuse images online, removing hundreds of thousands of them every year.

Ms Smith called the proposed law a “vital step to make sure AI products are safe before they are released”.

Image: An IWF analyst at work. Pic: IWF

How would the law work?

The changes are due to be tabled today as an amendment to the Crime and Policing Bill.

The government said designated bodies could include AI developers and child protection organisations, and it will bring in a group of experts to ensure testing is carried out “safely and securely”.

The new rules would also mean AI models can be checked to make sure they don’t produce extreme pornography or non-consensual intimate images.

“These new laws will ensure AI systems can be made safe at the source, preventing vulnerabilities that could put children at risk,” said Technology Secretary Liz Kendall.

“By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought.”

Video: AI child abuse image-maker jailed
