
December 11, 2023

The Top 10 Things Federal Technology Leaders Should Know About OMB’s Draft AI Policy

By Clare Martorana, Federal CIO, and Conrad Stosz, OFCIO Director of Artificial Intelligence

In October 2023, President Biden signed the landmark Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. We recognize that when AI is used to make decisions and take actions that have a consequential impact on the lives of individuals, the government has a distinct responsibility to identify and manage AI risks. A key action identified in the EO is for the Office of Management and Budget (OMB) to issue guidance on the Federal Government’s use of AI, positioning the U.S. to lead by example in the responsible use of this innovative technology. To deliver on this requirement, OMB issued the draft AI implementation guidance, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.

The draft policy outlines a series of actions to empower Federal agencies to leverage AI to improve government services and more equitably serve the American people. Below are the 10 most pressing questions for Federal senior technology officials, who will play an instrumental role in the policy’s future implementation.

1. What is in OMB’s proposed AI policy?

The draft guidance outlines three pillars to advance the responsible use of AI in government, each paired with examples of proposed agency actions:

  • Strengthen AI Governance: Designate Chief AI Officers who hold primary responsibility for coordinating their agency’s use of AI, promoting AI innovation, and managing risks from the use of AI.
  • Advance Responsible AI Innovation: Remove barriers impacting responsible AI use and develop agency AI strategies for achieving enterprise-wide advances in AI maturity.
  • Manage Risks from the Use of AI: Adopt minimum AI risk management practices for AI uses that impact rights and safety.

Read additional details in the AI Implementation Guidance Fact Sheet.

2. What will Chief AI Officers be responsible for? How will the newly created Chief AI Officer role interact with CIOs, CDOs, and CTOs?

Chief AI Officers (CAIOs) will hold primary responsibility for coordinating their agency’s use of AI, promoting AI innovation, and managing risks from the agency’s use of AI. Agencies have the flexibility either to create a brand-new position for this role or to designate an existing official to perform the Chief AI Officer’s responsibilities, provided the official has significant expertise in AI. For CFO Act agencies, the CAIO must be a position at the Senior Executive Service, Scientific and Professional, or Senior Leader level, or equivalent. In other agencies, the CAIO must be at least a GS-15 or equivalent.

Cross-cutting work such as AI governance and risk management cannot be performed in a vacuum; Chief AI Officers will need to coordinate with other relevant officials, such as agency CIOs, CDOs, and CTOs. This coordination is necessary for a number of reasons, not least because many existing teams already hold the authorities, resources, and expertise needed to carry out the responsibilities identified for the Chief AI Officer. CIOs, CDOs, and CTOs will remain deeply involved in the strategic planning for, acquisition of, and delivery of AI within their agencies. The Chief AI Officer will not replace their work but rather fill the gaps those roles were not designed to address, including efforts to mitigate algorithmic discrimination and establish processes for individuals to appeal harms caused by government AI.

3. How should CFO Act agencies ensure their AI Governance Board is sufficiently engaged with existing senior forums?

OMB’s draft memorandum would require CFO Act agencies to establish AI Governance Boards that convene relevant senior officials at least quarterly to govern their agency’s use of AI. AI Governance Boards must be chaired by the Deputy Secretary, or equivalent, and vice-chaired by the agency’s Chief AI Officer. The board must also include appropriate representation from senior agency officials responsible for elements of AI adoption and risk management.

Agencies would have the option to convene a new senior-level body or expand the remit of an existing governance body to meet the AI Governance Board requirements. Many agencies already convene senior officials to discuss issues tangential to AI, such as IT modernization, data governance, and privacy. Some agencies have also established groups dedicated specifically to AI governance and innovation. Rather than set up a separate body, agencies can choose to leverage these existing mechanisms, easing the burden of implementation.

4. Will my agency need to implement the identified AI risk management requirements every time AI is used?

No. AI is increasingly integrated into benign software applications and everyday consumer products, such as noise-cancelling headphones and auto-correcting text messages. OMB’s proposed AI risk management requirements are triggered only when a government AI use case meets the definition of safety-impacting or rights-impacting AI. The draft policy takes a risk-based approach to managing AI harms, ensuring agency resources are spent on the AI use cases that pose the greatest risks to the rights and safety of the public. As a rule of thumb, when AI is used to control or meaningfully influence the outcomes of consequential actions or decisions, agencies will need to implement the memorandum’s risk management requirements.

5. How do I know if my use case impacts rights or safety?

OMB’s draft memorandum identifies two broad categories of AI:

  • Rights-Impacting AI: AI whose output serves as a basis for a decision or action that has a legal, material, or similarly significant effect on an individual’s or community’s civil rights, civil liberties, or privacy; equal opportunities; or access to critical resources or services; and
  • Safety-Impacting AI: AI that has the potential to meaningfully impact the safety of human life or well-being, climate or environment, critical infrastructure, and/or strategic assets or resources.

These categories are further expanded upon in subsection 5(b) of the guidance, where OMB identifies specific purposes for which AI is automatically presumed to be safety-impacting or rights-impacting. This list is intended to reduce uncertainty—both for agencies and for the public—on when additional safeguards are warranted.
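
As an illustration of that triage logic, below is a minimal, hypothetical sketch of how an agency inventory tool might presumptively flag a use case. The purpose lists, function name, and category labels are assumptions for illustration only; the actual presumed purposes are enumerated in subsection 5(b) of the guidance.

    # Hypothetical triage helper. The purpose sets below are illustrative
    # stand-ins for the presumed categories in subsection 5(b); they are
    # not an authoritative enumeration of OMB's list.
    PRESUMED_RIGHTS_IMPACTING = {
        "benefits_eligibility_determination",   # example purpose (assumed)
        "law_enforcement_risk_assessment",      # example purpose (assumed)
    }
    PRESUMED_SAFETY_IMPACTING = {
        "critical_infrastructure_control",      # example purpose (assumed)
        "medical_diagnosis_support",            # example purpose (assumed)
    }

    def triage_use_case(purpose: str, influences_consequential_decision: bool) -> list[str]:
        """Return the risk categories a proposed AI use case presumptively falls into."""
        categories = []
        if purpose in PRESUMED_RIGHTS_IMPACTING:
            categories.append("rights-impacting")
        if purpose in PRESUMED_SAFETY_IMPACTING:
            categories.append("safety-impacting")
        # Rule of thumb from the draft policy: AI that controls or meaningfully
        # influences a consequential decision warrants review even when its
        # purpose is not on a presumed list.
        if not categories and influences_consequential_decision:
            categories.append("needs-review")
        return categories

    # A use case outside the presumed lists, but with consequential influence,
    # is still flagged for review:
    print(triage_use_case("loan_servicing_chatbot", True))  # ['needs-review']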

6. How will OMB’s AI risk management requirements feed into my agency’s Authorization to Operate process?

AI is software, and it therefore remains subject to an agency’s authorization process for information systems. OMB Circular A-130, Managing Information as a Strategic Resource, directly and indirectly tasks agency CIOs with assessing information systems for security and privacy risks. OMB’s draft guidance, however, identifies a new category of risk to consider: risks from the use of AI. This primarily includes risks related to the efficacy, safety, equity, fairness, transparency, accountability, appropriateness, or lawfulness of a decision or action resulting from the use of AI to inform, influence, decide, or execute that decision or action.

Under the memorandum’s proposed AI risk management requirements, agencies would be directed to use existing processes wherever possible, such as the Authorization to Operate (ATO) process, to assess, manage, evaluate, and continuously monitor this new category of risk. This means that when agencies review safety-impacting or rights-impacting AI through their ATO process, the Authorizing Official should collaborate with the Chief AI Officer and other appropriate AI oversight officials to assess the types of risks identified in the memorandum and ensure compliance.
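
To make that collaboration concrete, here is a minimal sketch of a record an agency might attach to an existing ATO package. The field names mirror the risk dimensions listed above, but the structure itself, including the sign-off fields, is an assumption for illustration, not a format prescribed by OMB.

    from dataclasses import dataclass, field

    # Risk dimensions named in the draft guidance's new category of risk.
    AI_RISK_DIMENSIONS = (
        "efficacy", "safety", "equity", "fairness", "transparency",
        "accountability", "appropriateness", "lawfulness",
    )

    @dataclass
    class AIRiskAssessment:
        """Hypothetical attachment to an ATO package (illustrative, not prescribed)."""
        system_id: str            # information system under ATO review
        use_case: str             # AI use case being assessed
        impact_category: str      # "rights-impacting" or "safety-impacting"
        findings: dict = field(default_factory=dict)  # dimension -> documented finding
        authorizing_official_signoff: bool = False
        chief_ai_officer_signoff: bool = False

        def is_complete(self) -> bool:
            # The Authorizing Official and Chief AI Officer both review, and
            # every risk dimension needs a documented finding.
            return (
                self.authorizing_official_signoff
                and self.chief_ai_officer_signoff
                and all(d in self.findings for d in AI_RISK_DIMENSIONS)
            )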

7. What will this policy mean for agencies’ use of generative AI?

It is critical to ensure that the use of generative AI does not pose undue risk to the public. Agencies must ensure that adequate safeguards and oversight mechanisms are in place before generative AI is used. For example, in line with EO 14110, agencies should explore granting limited access to specific generative AI services based on specific risk assessments, rather than implementing across-the-board bans. Additionally, some agencies have already established guidelines and limitations on the appropriate use of particular AI-enabled technologies, such as facial recognition. Similar guidelines can be written for the responsible use of generative AI.

8. What resources will be made available to help agencies with implementation?

EO 14110 identifies several actions that will directly assist agencies with implementation of OMB’s memorandum once it is finalized. These include:

  • Guidelines, tools, and practices developed by NIST to support implementation of the minimum risk-management practices described in OMB’s memorandum;
  • Further procurement guidance from OMB to ensure that Federal AI procurement aligns with the policies in this memorandum, and a method to track agencies’ AI maturity; and
  • A national surge in AI talent to grow the Federal Government’s AI workforce capacity.

9. Would this draft policy apply to contractors?

Yes. The guidance will apply to any development, use, or procurement of AI by the Federal Government or on its behalf, and, pursuant to EO 14110, OMB will issue further guidance focused specifically on contractors in the coming months.

10. What happens next?

OMB collected public comments via regulations.gov; it will review those recommendations and publish the comments. The next draft of the policy will be shared with the interagency council established in subsection 10.1(a) of EO 14110 before the policy’s final issuance. The final guidance is due within 150 days of the order.
