White AI faces appear more real than actual human faces.

Key Points:

  • Artificial Intelligence (AI) can generate faces that appear more “real” to people than actual human faces, a phenomenon called “hyperrealism”.
  • This is especially relevant with the rise of deepfakes, material artificially generated to impersonate real individuals.
  • However, AI achieves hyperrealism only with white faces; AI-generated faces of color fall into the “uncanny valley”.
  • This bias could have implications for how people of color are perceived online.

A study published in a Sage journal found that AI-generated faces seemed more “real” to people than photos of actual human faces, a phenomenon the study’s senior author, Amy Dawel, termed “hyperrealism”. This finding is especially significant in light of the increasing prevalence of deepfakes, artificially generated material designed to impersonate real individuals.

Notably, the study found that AI achieved hyperrealism only when generating white faces; AI-generated faces of color still fell into the uncanny valley. According to Dawel, this bias could shape how these tools are built and how people of color are perceived online.

AI algorithms, including StyleGAN2, are disproportionately trained on white faces, leading to “hyperreal” white faces. Examples of racial bias in AI systems include tools that convert regular photos into professional headshots, which often modify the skin tone and eye color of people of color.

These AI-generated images are increasingly used in marketing, advertising, and illustration. If developed with such biases, AI could reinforce racial prejudices in media, with serious societal implications, as Frank Buytendijk, Chief of Research at Gartner Futures Lab, explained.

The study also found that the people who most frequently misidentified AI-generated faces as real were also the most confident in their judgments. Dawel therefore argued for transparent, independently monitored development of generative AI. Mitigating these risks, however, will be an uphill battle, requiring not only significant resources but also conscientious effort from AI companies.