A recent report reveals that a disturbing rise in AI-generated child exploitation images online is creating new challenges for investigators. The surge in AI technology has unleashed an “avalanche” of lifelike images and videos depicting child sexual exploitation, sparking concern among experts dedicated to child protection.
The report, published by The Washington Post earlier this week, brings attention to the discovery of numerous AI-generated child exploitation images, found mostly on dark web forums.
The report also noted an abundance of ‘how-to’ advice on generating realistic AI images depicting children engaged in sexual acts.
Rebecca Portnoff, the director of data science at Thorn, a nonprofit organization focused on child safety, shared with The Washington Post, “Children’s images, including those of known victims, are being repurposed for this abhorrent purpose.”
Over the past few months, Thorn has observed a consistent growth in the prevalence of AI-generated images on the dark web.
The struggle to distinguish real-life victims from fake
The surge in these images poses a significant challenge to identifying victims and fighting actual abuse, as law enforcement agencies must now invest extra effort in determining whether photographs are authentic.
The report explains that the abundance of AI-generated child exploitation images may “confound” the central tracking system that blocks such material online. This system mainly focuses on identifying known instances of abuse rather than detecting newly generated content.
As a result, law enforcement officials tasked with identifying victimized children may now find themselves compelled to spend valuable time distinguishing between real images and those generated by AI.
The debate over the legalities
The proliferation of such images has ignited a debate regarding whether they violate federal laws protecting children, given that the depicted individuals often do not exist in reality. Justice Department officials responsible for combating child exploitation assert that these images are illegal, even when the child portrayed is artificially generated, reports The Washington Post.
So far, there have been no reported cases in the United States of individuals facing charges specifically for creating deepfake child pornography. However, in a recent ruling, a man from Quebec, Canada, was sentenced to three years in prison for using AI to generate child pornography images, setting a new legal precedent in that country, Petapixel reported.
Open-source AI generators
Experts say the platforms most often used to create these images and deepfakes are likely open-source, unpoliced tools such as Stable Diffusion. According to The Washington Post, Stability AI, the company behind Stable Diffusion, has stated that it bans the creation of child sex-abuse images, assists law enforcement investigations into “illegal or malicious” uses, and has removed explicit material from its training data, reducing the ability of bad actors to generate obscene content.
Protecting minors from exploitation and trafficking via new technologies has recently been at the forefront of several investigations. Popular social media sites Facebook and Instagram are hotspots for grooming and for sharing sexually explicit content involving minors. Essentially, the owners of these platforms, and now of the AI image generators, are not doing enough to counter this.
[Via Petapixel]