Artificial intelligence (AI) can absolutely benefit the United States government — but that use of AI comes with unique risks and considerations.
With that in mind, Executive Order (EO) 13960 was issued on December 8, 2020. Let’s find out what this EO is about.
An executive order is a directive from the President of the United States that manages operations of the federal government. It is not an official law passed by Congress; however, an EO does have “the force of law”. Executive orders are numbered consecutively.
Executive Order 13960 states that U.S. federal agencies must “design, develop, acquire, and use AI to foster public trust and confidence while protecting privacy, civil rights, civil liberties, and American values.”
Though this EO applies across all federal agencies, it specifically does not apply to:
AI used in defense or national security systems
AI embedded within common commercial products, such as word processors or navigation systems
AI research and development (R&D) activities
(Related reading: Executive Order (EO) 14110: Safe, Secure & Trustworthy AI.)
As stated in section one of the EO:
“[Federal] agencies are encouraged to continue to use AI, when appropriate, to benefit the American people. The ongoing adoption and acceptance of AI will depend significantly on public trust. Agencies must therefore design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values, consistent with applicable law and the goals of Executive Order 13859.
“Certain agencies have already adopted guidelines and principles for the use of AI for national security or defense purposes, such as the Department of Defense’s Ethical Principles for Artificial Intelligence (February 24, 2020), and the Office of the Director of National Intelligence’s Principles of Artificial Intelligence Ethics for the Intelligence Community (July 23, 2020) and its Artificial Intelligence Ethics Framework for the Intelligence Community (July 23, 2020). Such guidelines and principles ensure that the use of AI in those contexts will benefit the American people and be worthy of their trust.”
Though numerous, the principles outlined in the EO are straightforward. It states that AI designed, developed, acquired, and used by federal agencies must be:
Lawful and respectful of our Nation's values
Purposeful and performance-driven
Accurate, reliable, and effective
Safe, secure, and resilient
Understandable
Responsible and traceable
Regularly monitored
Transparent
Accountable
(Read more about ethics in AI & governance for AI.)
The order specified how each federal agency is expected to adhere to the principles above, giving agencies between 45 and 180 days to comply with its various directives. Here are the general steps, and the primary agencies involved, in the rollout of EO 13960.
The Director of the Office of Management and Budget (OMB) was asked to post a public roadmap for the policy guidance that OMB intends to create or revise to better support the use of AI.
The Federal Chief Information Officers Council (CIO Council) was asked to identify and provide guidance on the criteria, format, and mechanisms for agency inventories of non-classified and non-sensitive use cases of AI by agencies.
Within 180 days of the CIO Council's completion of the directive above, and annually thereafter, each agency must prepare an inventory of its non-classified and non-sensitive use cases of AI, including current and planned uses.
Within 120 days of completing their respective inventories, agencies were asked to develop plans either to achieve consistency with this order for each AI application or to retire AI applications found to be developed or used in a manner that is not consistent with this order. These plans must be approved by the agency-designated responsible official(s).
Within 60 days of the completion of their respective inventories of use cases of AI, agencies were asked to share their inventories with other agencies, to the extent practicable and consistent with applicable law and policy, including those concerning the protection of privacy and sensitive law enforcement, national security, and other protected information.
Within 120 days of the completion of their inventories, agencies were asked to make their inventories available to the public, to the extent practicable and consistent with applicable law and policy, including those concerning the protection of privacy and of sensitive law enforcement, national security, and other protected information.
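To make the inventory requirement concrete, here is a minimal sketch of what a single use case record might capture and how an agency might filter out sensitive entries before publication. The field names below are illustrative assumptions on our part, not the official format defined in the CIO Council guidance.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical fields for a single AI use case inventory entry.
# These names are illustrative assumptions, not the official CIO Council schema.
@dataclass
class AIUseCase:
    agency: str          # owning federal agency
    use_case_name: str   # short descriptive title
    purpose: str         # what the AI system is intended to do
    stage: str           # e.g., "planned", "in development", "deployed"
    is_sensitive: bool   # classified/sensitive cases stay out of the public inventory

# One example entry, plus the kind of filtering an agency might apply
# before making its inventory public (the 120-day public-release step).
inventory = [
    AIUseCase(
        agency="Example Agency",
        use_case_name="Automated document triage",
        purpose="Route incoming public correspondence to the right office",
        stage="deployed",
        is_sensitive=False,
    ),
]

public_inventory = [asdict(u) for u in inventory if not u.is_sensitive]
print(json.dumps(public_inventory, indent=2))
```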
The Presidential Innovation Fellows (PIF) program, administered by the General Services Administration (GSA) in collaboration with other agencies, was asked to:
Identify priority areas of expertise.
Establish an AI track to attract experts from industry and academia.
These PIF experts will work within agencies to further the design, development, acquisition, and use of AI in Government, consistent with this order.
(Explore common AI frameworks, including Splunk’s Trustworthy AI Principles.)
The Office of Personnel Management (OPM) was asked to create an inventory of Federal Government rotational programs and determine how these programs can be used to expand the number of employees with AI expertise at the agencies.
OPM was also asked to issue a report with recommendations for how the programs in that inventory can best be used to expand the number of employees with AI expertise at the agencies.
The EO outlined clear requirements for federal agencies utilizing AI, but according to the U.S. Government Accountability Office (GAO), not all of them have been met.
In December 2023, GAO made 35 recommendations to 19 agencies. Specifically, GAO recommended that:
15 agencies update their AI use case inventories to include required information and take steps to ensure the data aligns with guidance.
OMB, the Office of Science and Technology Policy (OSTP), and OPM implement AI requirements with government-wide implications, such as issuing guidance and establishing or updating an occupational series with AI-related positions.
12 agencies fully implement AI requirements in federal law, policy, and guidance, such as developing a plan for how the agency intends to conduct annual inventory updates, and describing and planning for regulatory authorities on AI.
What was the outcome of the GAO’s work? Generally, it was well received. Ten agencies agreed with GAO’s recommendations, and three partially agreed with one or more recommendations. Of the remaining agencies, four neither agreed nor disagreed, and one did not agree with its recommendation. OMB agreed with one recommendation but disagreed with another because it “had taken recent action.”
Beyond these compliance efforts, more recent Executive Orders on AI are beginning to have an impact that extends past the federal government.