Meta banned from using Brazilian data for AI training.

TLDR:

Brazil’s national data protection authority has ordered Meta to stop using Brazilian data to train its AI models. The authority gave Meta five days to change its policy or face fines. The decision comes after a report found identifiable images of Brazilian children in a dataset used to train image models.

Meta is disappointed with the decision, calling it a step backward for innovation. The company says it is more transparent about its AI training than others in the industry. Brazil’s decision follows similar pushback in Europe, where Meta delayed the launch of its AI services and paused training on EU and U.K. data.

In the U.S., where there is no comprehensive federal online privacy law, Meta plans to continue training its AI models. This is not the first time Meta has clashed with Brazilian authorities: the company was previously barred from using the “Meta” name in the country because another company already operated under it.

Key Points:

  • Brazil orders Meta to stop using Brazilian data for AI training
  • Meta faces fines if it does not comply with the order
  • Report finds identifiable images of Brazilian children in dataset used for AI training
  • Meta disagrees with decision and views it as a setback for innovation

Brazil’s decision to halt Meta’s use of Brazilian data for AI training reflects growing concern about data privacy and protection, especially where sensitive information such as images of children is involved. It also highlights the challenge tech companies like Meta face in complying with divergent privacy regulations around the world, and the potential impact of those rules on their operations and pace of innovation.