TLDR:
- Sexual predators are using AI image generators to exploit children by sharing thousands of AI-generated images of child sexual abuse on dark web forums.
- Current child sexual abuse laws are not equipped to handle the unique dangers posed by AI and other emerging technologies.
- Lawmakers must take action to put legal protections in place.
AI platforms are “trained” on existing visual material, including real children’s faces taken from social media and photographs of real-life exploitation.
AI-generated images of child abuse are increasingly difficult to distinguish from unaltered photographs.
Text-to-image software can easily create images of child abuse based on the perpetrator’s preferences.
AI tools like ChatGPT can help adults lure children online by generating messages that let a predator pose as a peer of the child’s own age.
We need to update the federal legal definition of child sexual abuse material to include AI-generated depictions, and we need to require tech companies to continuously monitor their platforms for exploitative material and report it.
Employees of social media and tech companies should have legally mandated reporting responsibilities.
We need to rethink how end-to-end encryption is used, because it can shield the storage and sharing of child abuse images from detection.
If lawmakers act now, we can prevent widespread harm to children.
Teresa Huizar is the CEO of National Children’s Alliance, America’s largest network of care centers for child abuse victims.