AI may replace human research participants, but scientists see potential risks.
Artificial intelligence (AI) is being considered as a replacement for human research participants in some scientific studies. While this could save time and effort, researchers have raised concerns about the quality and ethics of such methods: AI-generated data may not accurately reflect human experiences and could perpetuate biases and stereotypes. Careful thought and cross-field cooperation are needed to establish guidelines for the responsible use of AI in research.

In a new preprint accepted to the Association for Computing Machinery’s Conference on Human Factors in Computing Systems, researchers reviewed proposals to use large language models (LLMs) as stand-ins for human research subjects or to analyze research outcomes. While proponents cite potential benefits such as speed, cost reduction, risk avoidance, and increased diversity in research samples, experts in the field have raised significant concerns.

Many scientists are skeptical of AI-synthesized research data, since language models may not produce genuinely human-like responses. AI-generated data could also degrade the quality of human study data: some human participants may already be using generative AI to complete tasks on platforms like Amazon’s Mechanical Turk.

Ultimately, experts emphasize the need to establish guidelines and guardrails for the use of AI in research to protect the quality and integrity of scientific studies. Addressing the ethical and methodological challenges posed by AI-generated research data will require international collaboration and expertise from many fields.