State probe launched over fake Biden robocall, raising AI election concerns.

TLDR:

  • The New Hampshire attorney general’s office is investigating a fake robocall impersonating President Joe Biden that discouraged residents from voting in the presidential primary.
  • The use of AI in elections is becoming more prevalent and raises concerns about the spread of false information.
  • Democratic U.S. Sen. Edward Markey introduced the BIAS Act, which aims to establish safeguards against AI bias and discrimination in the federal government.
  • Meta and Google have introduced policies requiring advertisers to disclose the use of generative AI in political ads, with Google specifically requiring election advertisers to disclose digitally altered or synthetic content.

The New Hampshire attorney general’s office has launched an investigation into a fake robocall that impersonated President Joe Biden and discouraged residents from voting in the presidential primary. The robocall went out on Sunday and falsely appeared to have been sent by the treasurer of a political committee supporting write-in efforts for President Biden. The call stated, “Your vote matters in November, not today,” and claimed that voting in the primary would only benefit Republicans. The attorney general’s office said the voice in the robocall appeared to be artificially generated and urged recipients to report the calls to the state Department of Justice.

The use of AI in elections is a growing concern, as few safeguards currently exist to prevent the dissemination of false information. Democratic U.S. Sen. Edward Markey introduced the BIAS Act, which would require federal agencies that use AI to establish an office of civil rights to combat bias and discrimination. The act aims to protect marginalized communities and mitigate the harmful effects of AI. Markey expressed concern that AI could intensify the spread of misinformation, particularly during critical events and elections.

To address these concerns, Meta and Google have developed policies requiring disclosure of generative AI in political ads. Generative AI tools can create text, audio, and video content, and their use has surged since OpenAI released ChatGPT. Meta requires political advertisers to disclose the use of AI in ads containing photorealistic images, video, or audio that has been digitally created or altered to deceive. Google requires election advertisers to disclose whether their ads contain digitally altered or synthetically generated content depicting real people or events.

The investigation into the fake robocall in New Hampshire underscores concerns about the potential misuse of AI in elections. The BIAS Act and the disclosure policies from Meta and Google represent early steps toward safeguards against false information and bias. As AI continues to advance, policymakers and tech companies will need to address these concerns to preserve the integrity of elections.