Experts warn: child sexual abuse images found in AI training database may be tip of the iceberg.

A new Stanford University study has found that an AI image training database contained over 1,000 illegal images depicting child sexual abuse. The database, created by the non-profit LAION, was used to train popular image-generation tools. Such images in the training data can help the resulting models generate realistic, explicit imagery of children, even when the output depicts no real person. Experts blame a lack of accountability and regulation in the AI space for allowing illegal content to end up in training data. This is not the first case of child sexual exploitation through AI, and experts warn that the problem could worsen.

The Stanford researchers used PhotoDNA, a Microsoft technology that matches images against hashes of known illegal material, to identify individual illegal images in the database. Although they scanned only the LAION database for this report, similar explicit images of children may exist in other public datasets as well. The report highlights the need for accountability and regulation in the AI industry. The public is growing less forgiving of companies that scrape the internet for training data, and some companies are shifting toward licensed content for training.
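For readers curious how hash-based detection works in principle: PhotoDNA itself is proprietary and accessed through licensed Microsoft services, so the sketch below is only an illustrative analogue. It uses the open-source imagehash library as a stand-in perceptual hash, a made-up example hash, and a hypothetical threshold to show the general idea of comparing an image's fingerprint against a list of fingerprints of known illegal material.

```python
# Illustrative sketch only. This is NOT PhotoDNA: it substitutes the
# open-source "imagehash" perceptual-hash library to demonstrate the
# general technique of matching images against known hashes.
import imagehash
from PIL import Image

# Hypothetical blocklist: hex digests of perceptual hashes of known
# images, of the kind maintained by clearinghouses. The value below
# is a made-up placeholder, not a real hash.
KNOWN_HASHES = {
    imagehash.hex_to_hash("8f373714acfcf4d0"),
}

# Hamming-distance threshold for declaring a match (assumed value);
# perceptual hashes tolerate small edits like resizing or re-encoding.
MAX_DISTANCE = 5

def is_known_image(path: str) -> bool:
    """Hash an image and compare it against every hash in the blocklist."""
    candidate = imagehash.phash(Image.open(path))
    # Subtracting two imagehash values yields their Hamming distance.
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(is_known_image("example.jpg"))
```

Because the comparison is against precomputed hashes rather than the images themselves, a scanner can flag known material without anyone storing or viewing the illegal content, which is why this approach is standard for audits like the one described above.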