AI learns from kids’ photos despite strict parental privacy settings.


TLDR:

  • AI models are being trained on kids’ photos posted online, even with strict privacy settings.
  • Children’s images are being used without their knowledge or consent, potentially leading to privacy and safety risks.

Human Rights Watch has discovered that photos of children from Brazil and Australia, including indigenous children, are being used in AI datasets without their knowledge or consent. This poses privacy and safety risks, as identifying information such as names and locations is sometimes revealed alongside the images. Even after these images are removed, AI models have already been trained on them, raising concerns about the creation of harmful deepfake content.

Parents may not realize the extent of the risks to their children's privacy and safety online: even images posted under strict privacy settings, or unlisted videos on platforms like YouTube, can be scraped and used to train AI models. Training AI on children's images causes distinct harms, particularly for indigenous children, and removing images from one dataset does not prevent their use in other AI models that have already been trained.

Rather than expecting parents to take images down, the report suggests that legal protections be put in place to safeguard children's images online, since this exposure can have long-lasting consequences for children's privacy and safety. Upcoming reforms in Australia offer hope for stronger protection of children's online privacy, but more action is needed to prevent the misuse of children's data in AI training.