As our world becomes increasingly intertwined with artificial intelligence (AI), it is essential to develop a comprehensive regulatory framework to protect individual rights and ensure responsible AI use.
Enter the AI Bill of Rights, a vital component in addressing this transformative technology's ethical and legal challenges.
In this blog post, we will delve into what the AI Bill of Rights is, its key principles, and the future of AI regulation in the United States.
Read on for an introduction to this framework.
The AI Bill of Rights, formally the Blueprint for an AI Bill of Rights, is a framework that sets ethical guidelines for AI, protecting individuals from algorithmic discrimination and promoting responsible AI use across many sectors, including law enforcement, while safeguarding human rights.
Developed by the White House Office of Science and Technology Policy with input from academics, human rights organizations, major corporations, and the general public, the AI Bill of Rights aims to address the current and potential civil rights implications of AI.
As AI becomes an integral part of our daily lives, the necessity for regulation and ethical guidelines intensifies.
Unregulated AI could result in a variety of consequences, from inequality and job displacement to privacy violations and potential threats to human rights.
Governments worldwide are responding to the rise of AI with new laws, executive actions, and ethical guidelines.
The AI Bill of Rights, along with Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EOAI), aims to provide a comprehensive approach to AI governance.
The release of such guidance by the US will also likely influence many other nations to adopt similar forms of AI governance, as governments worldwide seek international consensus on AI regulation.
The US-led AI Bill of Rights could serve as a baseline for global standards and ultimately contribute to creating a more ethical and responsible AI ecosystem.
The AI Bill of Rights outlines five key principles to guide the development and deployment of AI systems, focusing on safety, fairness, privacy, transparency, and human alternatives.
These principles serve as the foundation for responsible AI use.
Here are the five key principles outlined in the blueprint:
AI systems should be safe and effective, with pre-deployment testing, independent evaluation, and ongoing monitoring to protect users from harm.
The Safe and Effective Systems principle of the AI Bill of Rights stipulates that individuals should be safeguarded from unsafe or ineffective automated systems and from the use of inappropriate data in their creation and implementation. As AI systems take on more consequential tasks, it is crucial to ensure their safety and effectiveness.
With a focus on safety and effectiveness, the AI Bill of Rights aims to reduce the risks of accidents, failures, and inaccuracies associated with AI.
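To make the testing requirement concrete, here is a minimal sketch (in Python) of a pre-deployment gate. The model, evaluation data, and 95% accuracy threshold are all hypothetical placeholders: a real safety program would add independent evaluation, domain-specific harm testing, and ongoing monitoring after launch.

```python
# A minimal, hypothetical pre-deployment gate: refuse to ship a model unless
# it clears an accuracy threshold on held-out evaluation data. The "model",
# data, and threshold below are illustrative placeholders only.

def passes_predeployment_gate(model, eval_inputs, eval_labels, min_accuracy=0.95):
    """Return True only if the model meets the accuracy bar on held-out data."""
    predictions = [model(x) for x in eval_inputs]
    correct = sum(p == y for p, y in zip(predictions, eval_labels))
    accuracy = correct / len(eval_labels)
    print(f"Held-out accuracy: {accuracy:.0%}")
    return accuracy >= min_accuracy

# Stand-in model and evaluation set for demonstration
model = lambda x: x > 0.5
eval_inputs = [0.2, 0.7, 0.9, 0.4, 0.6]
eval_labels = [False, True, True, False, False]

if not passes_predeployment_gate(model, eval_inputs, eval_labels):
    print("Gate failed: hold the release and investigate before deployment.")
```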
To prevent algorithmic discrimination, AI systems should be designed fairly, using representative data and proactive measures to ensure equity and fairness.
Algorithmic discrimination refers to automated systems treating people unfairly based on protected characteristics, often because of biased training data or design choices. Implementing algorithmic discrimination protections can help mitigate these issues and promote fairness in AI systems.
The AI Bill of Rights includes provisions that emphasize civil rights and equity, mitigating the possibility of biased or discriminatory AI algorithms and promoting a fair and just application of AI technology.
For example, Amazon scrapped an experimental recruiting tool after discovering it was biased against women. The blueprint states that AI systems should not discriminate based on gender, race, ethnicity, or other protected characteristics.
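As an illustration of how such bias can surface in outcome data, here is a minimal sketch of one common statistical check, demographic parity, applied to hypothetical screening decisions. The data, group labels, and the four-fifths threshold are assumptions for the example, not part of the blueprint.

```python
# An illustrative fairness check: compare favorable-outcome rates across
# groups (demographic parity). The decisions and group labels below are
# hypothetical; real audits use richer metrics and human review.

def selection_rates(outcomes, groups):
    """Return the favorable-outcome rate for each group."""
    rates = {}
    for group in set(groups):
        decisions = [o for o, g in zip(outcomes, groups) if g == group]
        rates[group] = sum(decisions) / len(decisions)
    return rates

# Hypothetical screening decisions (1 = advanced, 0 = rejected)
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(outcomes, groups)
print(rates)  # e.g. {'A': 0.6, 'B': 0.4}

# The "four-fifths rule" is one common heuristic: flag the system if any
# group's rate falls below 80% of the highest group's rate.
worst, best = min(rates.values()), max(rates.values())
if worst / best < 0.8:
    print("Potential disparate impact: review training data and features.")
```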
Data privacy should be respected, with user consent for data collection and use, enhanced protections for sensitive domains, and prohibitions on unchecked surveillance.
The AI Bill of Rights outlines privacy protections that prioritize design choices, safeguard against abusive data practices, and offer notice and explanations to individuals regarding the use of their data and the reasoning behind AI decisions.
With an emphasis on data privacy, the AI Bill of Rights aims to shield individuals from potential privacy violations and promote the responsible application of AI technology.
Much like the EU's General Data Protection Regulation (GDPR), the AI Bill of Rights calls for data protection measures, such as requiring user consent for data collection and use.
Automated systems (in this case, AI) should provide clear notice and explanations of outcomes that impact individuals, with accessible documentation and updates on significant changes.
Effective methods for providing explanations in an automated system include clear and timely notice that the system is in use, plain-language descriptions of how and why an outcome was determined, and accessible documentation that is kept up to date as the system changes.
Ensuring transparency and accountability in AI decision-making processes is essential for fostering trust and fairness in utilizing AI technology.
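To sketch what this could look like in practice, the hypothetical example below attaches notice, a plain-language explanation, and a model version to each automated decision. The schema and field names are assumptions for illustration, not a standard.

```python
# A minimal sketch of machine-readable notice and explanation for an
# automated decision. The field names and values are hypothetical
# illustrations of the principle, not a standard schema.
import json
from datetime import datetime, timezone

def decision_record(subject_id, outcome, top_factors, model_version):
    """Bundle an automated decision with notice and a plain-language explanation."""
    return {
        "subject_id": subject_id,
        "automated_system_in_use": True,   # clear notice that AI was involved
        "outcome": outcome,
        "explanation": top_factors,        # main factors behind the outcome
        "model_version": model_version,    # supports tracking significant changes
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

record = decision_record(
    subject_id="applicant-123",
    outcome="declined",
    top_factors=["credit utilization above 90%", "account age under 6 months"],
    model_version="2024.06-r2",
)
print(json.dumps(record, indent=2))
```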
AI systems should offer the option to opt out and access human or other alternative options where appropriate, focusing on accessibility, protection from harm, and timely human consideration and remedy.
The “Human Alternatives, Consideration, and Fallback” section of the AI Bill of Rights ensures that human alternatives are available and considered when decisions are made with AI, allowing individuals to opt out of automated systems when appropriate.
The AI Bill of Rights gives precedence to human alternatives and fallbacks, aiming to reduce the possible harms and unexpected outcomes of AI technology.
It also specifies that any problems encountered should be quickly remedied.
To get a better understanding of the White House's AI Bill of Rights, we spoke with Irina Tsukerman, President at Scarab Rising. Irina is a human rights and national security lawyer based in New York and a Fellow at the Arabian Peninsula Institute. Among other affiliations, she is a member of the North American Society for Intelligence History, a Fellow at the Jerusalem Center for Public Affairs, and a member of the American Bar Association's Energy and Environment and Science and Technology Sections.
In this section, we've included Irina's responses to our prompts.
The AI "Bill of Rights" appears more of a political nod to social pressures and concerns about the integration of a new technological innovation and its potential and unpredictable impact on the labor market, social media, reputational management, and other aspects of contemporary culture than a serious effort to address any new "right".
From a constitutional legal standpoint, this is entirely an invention that may at best complement or supplement existing codified rights but would not likely be strictly enforceable. Rather, it is an aspirational ethical framework and advisory infrastructure by White House policymakers struggling to reconcile existing needs and technical demands with the psychological and perception narratives around the hot-button topic.

Some of the topics covered by the blueprint, which very much suggests a work in progress, are "safe and effective systems", algorithmic discrimination protections, data privacy, notice and explanation (in other words, notifying information consumers whenever AI is used to enhance images or other types of information), and human alternatives, consideration, and fallback (in other words, how to prepare labor resilience and avoid massive unemployment by transitioning workers to the new reality through education and skill building). None of these topics is particularly objectionable as an aspirational ideal, most are already present in some capacity in other areas, and few are easily resolvable or enforceable.

For instance, while algorithmic discrimination will clearly draw a negative social response when it is demonstrably present, as has happened with a few generative AI tools such as Google's Gemini released in the nascent stage of these tools' development, it is hard for a government to enforce any sort of anti-discriminatory provision over a process that is itself highly experimental, not always controllable, and not well understood technically by many policymakers.
However, to quote the document, "These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs." In other words, the US government views this bill of rights as a guidance tool to navigate conversations around this topic rather than a strict code that the White House itself will follow or enforce in federal agencies, much less the private sector.
It is clear that there is still no concrete listing of specific "best practices" for reaching the desired outcome, which means these discussions with the relevant stakeholders can be expected to continue no matter who ends up in the White House in November or in the near future. The question here is not whether this is "necessary", but whether the US government and executive branch can avoid being involved in this issue in some capacity.
It is clear that various actors will be putting various forms of pressure on the decision-makers to get involved. This aspirational set of guidelines is the best possible compromise between policymakers getting overly involved in an area they don't quite understand and that is rapidly evolving, and being accused of not providing sufficient regulation or keeping potentially dangerous new tools in check.
Because there is no real enforcement mechanism, this bill of rights is more of a tool that different sectors and industries can interpret in a way that makes sense for them, while at the same time providing an idea of red lines from a liability perspective. Over time, as the area becomes more settled and legal issues generate more precedents, there will much more likely be an effort to codify specific policies and restrictions around the use of AI and the protection of the public from both government and private sector abuses. At the moment, however, it should be viewed as nothing more than a conversation starter, a jumping-off point for more serious deliberations that will evolve in various directions over time.
The future of AI regulation is expected to include federal AI initiatives, state and local AI laws in the US, and the possibility of international regulation.
While no comprehensive federal law currently limits AI use or protects citizens from it, federal guidelines and protections do exist, such as the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.
This executive order mandates that developers of AI systems share safety test results with the US government if their systems pose potential risks to national security, and it requires several federal agencies to develop guidelines and standards for AI safety and security.
US states are creating laws to regulate specific AI-related issues, such as Colorado’s regulation of insurers’ use of big data and California’s law banning the undisclosed use of chatbots to influence votes.
These state-level AI laws demonstrate the growing recognition of the importance of AI regulation and the need for tailored solutions to address the unique challenges and risks associated with AI technology.
More such regulations are expected from other countries in the future, with the European Union’s AI Act being a notable example. Aiming for safe, transparent, traceable, non-discriminatory, and environmentally friendly AI systems, the act imposes strict regulations on AI within the region.
Another example is China's own set of AI regulations. The Cyberspace Administration of China (CAC) has issued the official Interim Administrative Measures for Generative Artificial Intelligence Services, known as the “Generative AI Measures”, which provide guidance on developing generative AI systems.
In conclusion, the AI Bill of Rights represents a crucial step in addressing the ethical and legal challenges posed by AI, and it provides a foundation for future AI regulation in the United States and internationally.
As we continue to grapple with the implications of AI in our daily lives, we must prioritize protecting individual rights and ensuring the responsible and ethical use of this transformative technology.