White House Releases Final Draft of AI Governance Policy

The final draft of the White House AI governance policy is out. Among other things, it sets rules for AI uses that could impact Americans’ rights or safety; expands what agencies share in their AI use case inventories; requires agencies to designate chief AI officers; and sets in motion a surge to hire at least 100 AI professionals into government by this summer. Here is what you need to know from the 34-page White House document:

The White House has issued a comprehensive memorandum aimed at strengthening governance, innovation, and risk management in federal agencies’ use of artificial intelligence. The directive emphasizes striking a balance between harnessing AI’s benefits and mitigating its risks, especially risks to the public’s rights and safety.

Here’s a simplified breakdown of the key highlights:

AI Leadership and Governance: Executive departments and agencies are required to appoint Chief AI Officers (CAIOs) within 60 days. These officers will oversee AI governance, including risk management and innovation, coordinating closely with various stakeholders across technical and policy domains.

Promoting Responsible AI Use: Agencies are encouraged to adopt AI responsibly, focusing on modernizing operations and enhancing service delivery. This involves developing enterprise strategies for AI adoption, sharing AI models and data, and removing barriers to AI use. Emphasis is placed on addressing challenges in IT infrastructure, data governance, cybersecurity, and workforce development.

Managing AI Risks: The memorandum establishes new standards and practices for managing the risks of using AI, including guidelines for AI that impacts safety and rights. Agencies must follow minimum risk management practices for these uses.

AI Procurement: The memorandum also outlines procedures for federal procurement of AI, ensuring compliance with legal, transparency, performance, and ethical standards.

Funding and Resources: The memorandum suggests prioritizing current resources and seeking additional funding through budget processes to support AI initiatives. It emphasizes the importance of financial, human, information, and infrastructure resources in implementing AI governance and risk management practices effectively.

Innovation and Collaboration: Agencies must foster an environment conducive to AI innovation. This includes developing AI strategies, removing barriers to its use, ensuring AI talent development, and encouraging open sharing and collaboration on AI projects.

Public Safety and Rights: Special attention is given to AI applications that could impact public safety and individual rights. Agencies must conduct comprehensive impact assessments, testing, and ongoing monitoring of AI systems to prevent harm or discrimination.

Engagement and Transparency: The policy calls for engaging with the public and affected communities in the development and deployment of AI technologies. Agencies are tasked with providing clear documentation and public notice about AI uses, ensuring transparency and accountability.

This new policy underscores the administration’s commitment to leveraging AI’s transformative potential while safeguarding public interest. It sets a clear framework for responsible AI use within federal agencies, emphasizing governance, innovation, risk management, and public engagement.
