Lawmakers aim to carve intimate AI deepfakes out of Section 230 protection.

TLDR:

  • Bipartisan lawmakers propose the Intimate Privacy Protection Act to strip Section 230 protection from tech companies that fail to address intimate AI deepfakes
  • The bill creates a duty of care for platforms to prevent cyberstalking, intimate privacy violations, and digital forgeries, including AI deepfakes

Key Elements of the Article:

Representatives Jake Auchincloss (D-MA) and Ashley Hinson (R-IA) have introduced the Intimate Privacy Protection Act, a bill that would hold tech companies accountable for removing intimate AI deepfakes from their platforms. The proposal would amend Section 230 of the Communications Act of 1934, which currently shields online platforms from liability for user-generated content. It establishes a duty of care requiring platforms to maintain a “reasonable process” for addressing cyberstalking, intimate privacy violations, and digital forgeries, including AI deepfakes that are virtually indistinguishable from authentic records.

Under the bill, platforms must implement measures to prevent these harms, establish clear reporting channels, and remove offending content within 24 hours. The sponsors argue that tech companies should not be able to use Section 230 as a shield against responsibility for the spread of malicious deepfakes and privacy violations. Combating intimate AI deepfakes has gained traction in AI policy discussions, with recent legislative action at both the federal and state levels, and Microsoft has also called for regulatory measures targeting AI-generated deepfakes.

The Intimate Privacy Protection Act’s duty-of-care provision mirrors similar language in the Kids Online Safety Act, pointing to a broader trend of imposing affirmative obligations on platforms. While bipartisan efforts to amend Section 230 have stalled in the past, this bill’s narrow focus on intimate AI deepfakes and privacy violations could help it attract support. Stay tuned for further developments as the bill moves through Congress.