More news about Artificial Intelligence (AI)? We know. It’s hard to avoid the chatter — and that’s for good reason.
The rise of AI has many people excited for things to come. But many others are, quite understandably, concerned about the ethical implications of this powerful technology. Fortunately, the Biden Administration is working to address the concerns of the American people by governing the development and use of AI.
Executive Order (EO) 14110 — on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence — was issued on October 30, 2023. (Executive orders are published directives that manage operations of the U.S. federal government. Only a sitting President of the United States may issue one.)
A similar order, EO 13960, entitled “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” was issued on December 8, 2020. It states that agencies must “design, develop, acquire, and use AI to foster public trust and confidence while protecting privacy, civil rights, civil liberties, and American values.”
While EO 13960 focused on the federal government, EO 14110 applies to all industries and sectors. As stated in section one of EO 14110:
Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure.
At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its benefits requires mitigating its substantial risks.
This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.
(Related reading: AI governance & AI ethics.)
The EO itself is lengthy. Below are eight primary policies and principles that EO 14110 lays out.
This one is a given. The EO explains that testing and evaluations, including post-deployment performance monitoring, will help ensure that AI systems…
Safe and secure AI is especially important in biotechnology, cybersecurity, critical infrastructure, and other national security matters.
(Read about AI TRiSM: AI Trust, Risk, and Security Management & how Splunk supports the nation’s cybersecurity mandates.)
This effort requires investments in AI-related education, training, development, research, and capacity, while simultaneously tackling novel intellectual property questions and other problems to protect inventors and creators.
As AI creates new jobs and industries, the EO emphasizes that all workers need a seat at the table, including through collective bargaining, to ensure that they benefit from emerging opportunities.
AI is created by human beings, who all carry some level of prejudice — whether conscious or unconscious. This means AI runs the risk of discriminating against, or perpetuating harm toward, people of color and other marginalized communities.
The Biden Administration is committed to building on the important steps that have already been taken — such as issuing the Blueprint for an AI Bill of Rights, the AI Risk Management Framework, and Executive Order 14091 (Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government) — in seeking to ensure that AI complies with all Federal laws. It also aims to promote robust technical evaluations, careful oversight, involvement of affected communities, and rigorous regulation.
(Related reading: risk management frameworks.)
In times of rapid and monumental change, consumer protections are more important than ever. The use of AI does not excuse organizations and businesses from their legal obligations.
The Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other potential harms from AI.
Artificial Intelligence makes it easier to extract, re-identify, link, infer, and act on sensitive information about people’s identities, locations, and behavior. This increased access elevates the risk of personal data being exploited and exposed.
To combat this risk, the Federal Government will ensure that the collection, use, and retention of data is lawful, secure, and mitigates privacy and confidentiality risks.
(Understand data lifecycle management & confidentiality.)
The Biden Administration states it will take steps to attract, retain, and develop public service-oriented AI professionals, including from underserved communities, across disciplines such as technology, policy, management, procurement, regulation, ethics, governance, and law. It also aims to ease AI professionals’ path into the Federal Government to help harness and govern AI.
The Biden Administration will engage with international allies and partners in developing a framework to manage AI’s risks, unlock AI’s potential for good, and promote common approaches to shared challenges.
The Federal Government will seek to promote responsible AI safety and security principles and actions with other nations, including our competitors, while leading key global conversations and collaborations to ensure that AI benefits the whole world — rather than exacerbating inequities, threatening human rights, and causing other harms.
(Learn about Splunk’s AI philosophy & watch the on-demand webinar.)
All those policies and principles are...a lot. So, what happens now? At least three things:
The president is ordering the National Security Council and White House Chief of Staff to develop a National Security Memorandum directing further AI and security actions.
Unlike the EO, the memorandum (memo) will be brief, straightforward, and easy to read. It will provide an action plan with specific next steps for putting the EO’s policies into practice.
The executive order requires several federal agencies to appoint a chief artificial intelligence officer (CAIO). Several agencies have already appointed one (either before the order or shortly after its release), including the National Science Foundation and the Departments of Homeland Security, Health and Human Services, Defense, and Education.
(Know the differences: CIOs vs. CISOs vs. CPOs.)
The EO also assigns specific, mandated responsibilities to several departments.
Stay tuned for more happenings surrounding EO 14110 — the waves of AI change have only just begun.