TLDR:
- White House requires agencies to appoint chief AI officers and create safeguards for AI use
- New OMB policy focuses on protecting human rights and public safety in AI applications
The White House Office of Management and Budget has issued a new policy requiring US government agencies to ensure human oversight of AI models making critical decisions. The policy emphasizes the need for safeguards to protect human rights and maintain public safety in AI applications across federal agencies. Agencies are mandated to appoint chief AI officers and conduct annual inventories of their AI use.
The policy requires agencies to address the impact of AI on public safety and human rights, particularly in areas such as healthcare, housing, employment, and immigration. Safeguards must be implemented to mitigate algorithmic discrimination and ensure transparency in government AI use. Agencies are advised to release government-owned AI code, models, and data when doing so does not pose a risk to the public or government operations.
While some groups have praised the policy as a step toward protecting US residents from AI abuses, others have raised concerns about exceptions for national security systems, intelligence agencies, and law enforcement information. There have also been calls for Congressional action to address gaps in the policy and ensure responsible AI use.
Overall, the OMB policy is seen as a way for the federal government to lead by example in AI governance. It encourages responsible AI use while emphasizing transparency and safeguards for public safety and human rights. The policy’s approach differs from the EU’s AI Act, focusing more on agency autonomy and accountability.