TLDR: Election Wave And AI Disinformation Raise Stakes In 2024
With elections due in countries representing half the world’s population and new technologies turbo-charging disinformation, 2024 will be a major stress test for politics in the age of AI.
Key Points:
- 2024 labeled a “make-or-break” year for democracy
- Voters in Taiwan backed Lai Ching-te despite a disinformation campaign orchestrated by China
- Generative AI threatens to exacerbate polarization and loss of trust in mainstream media
- The World Economic Forum ranks disinformation as the number one threat over the next two years
- AI-powered disinformation deployed by Russia, China, and Iran to shape and disrupt elections in rival countries
- Repressive regimes may use the threat of disinformation to justify greater censorship and rights violations
- Governments are working to introduce legislation to target disinformation, but progress is slow compared to AI advancements
- Tech firms have introduced their own initiatives to fight disinformation
2024 has been labeled a “make-or-break” year for democracy, with crucial votes due in more than 60 countries. The first major test of how to withstand an onslaught of AI-powered disinformation has already taken place in Taiwan, where voters backed Lai Ching-te for president despite a massive disinformation campaign against him orchestrated by China. Generative AI threatens to deepen polarization and erode trust in mainstream media. The World Economic Forum (WEF) has ranked disinformation as its number one threat over the next two years, warning that undermining the legitimacy of elections could lead to internal conflicts, terrorism, and, in extreme cases, even “state collapse.” Groups linked to Russia, China, and Iran are deploying AI-powered disinformation in campaigns aimed at undermining rival countries’ elections. Repressive regimes could also use the threat of disinformation to justify greater censorship and rights violations.
States hope to fight back with legislation, but lawmaking moves slowly compared to the exponential progress of AI. Governments are introducing rules requiring platforms to target disinformation and remove illegal content, and China and the EU are both working on comprehensive AI laws, though these will take time to complete. US President Joe Biden issued an executive order on AI safety standards, but critics say it lacks teeth, and some lawmakers fear over-regulation will harm the US tech industry. Tech firms have introduced their own initiatives, such as content verification tools, but skeptics question their effectiveness, particularly as the firms increasingly rely on AI itself for verification, which may not be the best way to understand hostile strategies.