The Taylor Swift Deepfakes Are a Warning
Is it too early to say that, on balance, generative artificial intelligence has been bad for the internet? One, its rise has unleashed a flood of AI-generated spam that researchers say now outranks human-written stories in Google search results; the resulting decline in advertising revenue is a key reason the journalism industry has been devastated by layoffs over the past year. Two, generative AI tools have enabled a new category of electioneering and fraud: this month, synthetic voices were used to deceive voters in the New Hampshire primary and in Harlem politics, and the Financial Times reported that the technology is increasingly being used in scams and bank fraud. Three, and the subject I want to focus on today, is how generative AI tools are being used in harassment campaigns.
Taylor Swift Deepfakes
The subject gained wide attention on Wednesday, when sexually explicit, AI-generated images of Taylor Swift flooded X. At a time when the term “going viral” is wildly overused, these images truly did find a huge audience.
Platforms and Policies
It would be a mistake, though, to view Swift’s harassment this week solely through the lens of X’s failures. A second, necessary lens is how platforms that have rejected calls to actively moderate content have given bad actors a means to organize, create harmful content, and distribute it at scale. In particular, researchers have now repeatedly observed a pipeline between the messaging app Telegram and X: harmful campaigns are organized and created on the former, then distributed on the latter.
The Technology Itself
The Telegram-to-X pipeline described above was possible only because Microsoft’s free generative AI tool Designer, currently in beta, created the images. And while Microsoft blocked the relevant keywords within a few hours of the story gaining traction, it is all but inevitable that some free, open-source tool will soon generate images even more realistic than the ones that polluted X this week.
Future Implications
Deepfake creators are taking requests on Discord and selling their work through their websites. So far, only 10 states have addressed deepfakes through legislation; there is no federal law prohibiting them.
This type of abuse was widely predicted, and platforms have so far failed to get ahead of it. Harmful content spread through generative AI has become a significant problem, with victims ranging from celebrities like Taylor Swift to ordinary people. As these tools become more accessible, the risk of abuse will only grow. Lawmakers and platforms must act before it does.