Employees Say OpenAI and Google DeepMind Are Hiding Dangers

TLDR:

  • A group of current and former employees at OpenAI and Google DeepMind warn of the dangers of advanced AI in an open letter.
  • The employees allege that the companies prioritize financial gains over oversight, leading to potential risks up to and including human extinction.

A group of current and former employees at leading AI companies OpenAI and Google DeepMind published a letter on Tuesday warning of the dangers of advanced AI, alleging that the companies prioritize financial gains while avoiding oversight. Thirteen employees, eleven of whom are current or former employees of OpenAI, the company behind ChatGPT, signed the letter, entitled “A Right to Warn about Advanced Artificial Intelligence.” The two other signatories are current and former employees of Google DeepMind. Six of the individuals are anonymous. The coalition cautions that, without proper regulation, AI systems are powerful enough to cause serious harm. “These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction,” the letter says.

Key Points:

A group of current and former employees at leading AI companies OpenAI and Google DeepMind published a letter warning of the dangers of advanced AI, alleging that the companies prioritize financial gains over oversight. They warn of risks such as manipulation, misinformation, loss of control of autonomous AI systems, and, ultimately, human extinction.

The employees allege that AI companies hold information about the risks of their technology but are not required to disclose much of it to governments, leaving the real capabilities of their systems unknown to the public. They also find themselves bound by confidentiality agreements that prevent them from voicing concerns publicly.

The letter writers have made four demands of advanced AI companies, including creating anonymous processes for employees to raise concerns and supporting a culture of open criticism. Governments have begun to regulate AI, but regulation lags behind the pace at which the technology is advancing.