OpenAI’s safety plan empowers board to reverse AI safety decisions.

OpenAI, the AI company backed by Microsoft, has published a plan outlining its approach to AI safety. The plan establishes a framework for addressing safety risks in its most advanced models and creates an advisory group that will review safety reports and send them to the company’s executives and board; while executives will make safety decisions, the board will have the authority to reverse them. OpenAI will deploy its latest technology only if it is deemed safe in specific areas such as cybersecurity and nuclear threats.

The move follows concerns about the potential dangers of AI, particularly its capacity to spread disinformation and manipulate humans. A group of AI industry leaders and experts called for a pause in developing AI systems more powerful than OpenAI’s GPT-4, citing potential risks to society. A Reuters/Ipsos poll found that more than two-thirds of Americans are worried about the possible negative effects of AI, with 61% believing it could threaten civilization.