The EU is currently developing one of the world’s first comprehensive regulations on Artificial Intelligence. Initially proposed in April 2021, the draft AI Act is now entering its final stage of negotiations, with policymakers aiming to agree on a final text before the end of the year. Given the scope of the Regulation, and its likely impact in the EU and beyond, this is a good opportunity to review some of the key issues still under discussion and what they could mean for AI adoption and innovation in Europe.
Splunk uses AI to accelerate human decision making in service of incident detection, investigation and response. Min Wang, Splunk Chief Technology Officer, noted in a recent blog: “Productivity and efficiency can drastically increase by freeing users from basic tasks and allowing them to focus on higher-value initiatives. We believe the benefits of AI far outweigh the downsides and are increasing our investments in taking our trusted AI capabilities even further.” For example, the new Splunk AI Assistant (preview) uses generative AI to provide a chat experience that helps customers author and learn Search Processing Language (SPL) by interacting in plain English.
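To give a purely illustrative sense of what this looks like in practice, a plain-English request such as “Which hosts returned the most web server errors today?” might be translated into SPL along these lines (the index, sourcetype and field names below are hypothetical placeholders, not actual assistant output):

index=web sourcetype=access_combined status>=500
| stats count AS errors BY host
| sort - errors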
Splunk’s approach to AI is domain-specific (with a number of security and observability use cases), open and extensible (easily integrated with third-party frameworks) and based on human oversight.
At Splunk we closely follow policy and regulatory developments around AI, and the EU AI Act particularly stands out. As one of the first attempts to regulate AI, we think it may well inspire other jurisdictions in the near future. It is of particular interest to us to understand how some of the principles of the EU AI Act could be integrated into a future EU-US AI Code of Conduct.
Since the proposal came out in 2021, we have rarely seen so much excitement and passion for a legislative text in Brussels. In the European Parliament, two committees have been jointly responsible for the Regulation, led by two co-rapporteurs: Brando Benifei (Italian Socialist) and Dragoș Tudorache (Romanian Liberal); five other committees provided opinions. AI is everyone’s business in the European Parliament. In the lead committees, 3,312 (!) amendments were tabled in June 2022, and it took the Parliament a year to work through them all and adopt its position.
In the Council, discussions have progressed faster. Under the impulse of the Czech EU Presidency (July-December 2022), the Council adopted its general approach in December 2022. Both institutions are now ready to start ‘trilogues’ with the European Commission - negotiations behind closed doors where policymakers go through the text article by article and try to agree on a compromise position. The Parliament’s position was the result of a delicate compromise between political groups, so the co-rapporteurs have very little room for maneuver in the ongoing negotiations.
Two trilogues have already taken place (in July), and two more are scheduled for 2-3 October and 25 October. There is pressure on the Spanish EU Presidency and the Parliament’s rapporteurs to conclude negotiations by the end of October.
Since the proposal came out, the European Commission’s ambition has remained the same: regulating uses of AI that present risks to health and safety and to fundamental rights, without calling into question Europe’s capacity to innovate with AI. This is challenge #1.
The Commission also aimed to develop a horizontal framework, with common rules that would apply across all sectors, whilst taking some sectoral specificities into consideration. This is challenge #2.
The EU institutions have also tried to deliver a future-proof regulation, able to keep up with technological developments. This is challenge #3.
Finally, the AI value chain is complex: different actors are involved in designing, developing and deploying AI systems. Any obligation should fall on the entity best positioned to mitigate the risk of harm associated with a given AI system. Striking the right balance of responsibilities is challenge #4.
I see these challenges as genuine tests against which to evaluate the final AI Regulation. As we review the issues to be discussed in trilogues, we can ask ourselves how the final text will fare against these four tests. See our scorecard at the end of this blog for a preliminary assessment!
What is AI? There are as many definitions as there are people and organisations trying to define it. The OECD argues that “an AI system is a machine-based system that is capable of influencing the environment by producing an output (predictions, recommendations or decisions) for a given set of objectives.” The UK Government prefers to define AI as “the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence.”
The European Commission initially proposed a definition of AI (in Annex I) that was considered too broad, as it covered techniques that were not always AI but ‘just’ traditional software. Both Parliament and Council have proposed to delete Annex I. The European Parliament has in the meantime proposed a definition under Article 3(1) that is aligned with the OECD’s definition and designed to remain future-proof: “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.” The OECD is working on a new AI definition (to be ready by October/November), which is likely to influence the EU negotiations on the topic.
Some AI systems are considered to present an unacceptable level of risk and will therefore be banned by the upcoming regulation. For example, Member States and Parliament have agreed that social scoring systems shall be banned. The final list of prohibited practices will however be a very political issue. Member States wish to retain the possibility of using ‘real-time’ remote biometric identification systems for law-enforcement purposes (if certain conditions are met), whilst the European Parliament wants to prohibit their use in public spaces.
The Parliament believes that predictive policing systems (based on profiling, location or past criminal behaviour) shall also be banned, as well as emotion-recognition systems in the areas of law enforcement, border management, the workplace and education institutions. The Parliament’s position on prohibitions is deemed unacceptable by Member States, who want to remain free to use some AI systems for law enforcement purposes.
High-risk AI systems will be permitted, but subject to compliance with AI requirements and ex ante conformity assessment. High-risk AI systems are listed under Annex III of the AI Regulation. There is broad agreement on the critical use cases listed there, although the European Parliament added some use cases such as AI systems “used for influencing the outcome of an election (...) or the voting behaviour of natural persons”.
In addition, Article 6(2) intends to refine this list by providing a definition of what constitutes “high-risk”. The European Parliament insists on “the significant risk of harm to the health, safety or fundamental rights of natural persons”, whilst the Council proposes to consider all use cases listed under Annex III as high-risk “unless the output of the system is purely accessory and is not therefore likely to lead to a significant risk to the health, safety or fundamental rights”. We support the Parliament’s approach under Article 6(2), which provides more certainty.
The compliance requirements for high-risk AI systems are not subject to much debate between Member States and parliamentarians. The final list of obligations has already been largely agreed and should include requirements in the areas of data governance, technical documentation, record-keeping (logging), transparency, human oversight, accuracy, robustness and cybersecurity.
We do however expect discussions on the Parliament’s request to introduce a Fundamental Rights Impact Assessment for high-risk AI systems before they are placed on the EU market, as some national delegations deem this a step too far.
As mentioned before, the AI supply chain is complex. Since 2021, Member States and MEPs have been looking for ways to strike the right balance of responsibilities amongst supply chain actors. The Parliament proposes a reasonable approach in its amendments to Article 28: any distributor, importer, deployer or other third party shall be subject to the compliance obligations of the provider if it places a non-high-risk AI system into a high-risk setting. In short, the entity that decides the destination of the AI system is considered the ‘new’ provider and is responsible for compliance. The ‘original’ provider shall however give the ‘new’ provider the technical documentation necessary to meet the requirements and comply with the obligations. We think this approach makes sense and hope it will be retained in the final text.
Negotiations on the AI Act have reflected market and technology developments. Whilst 2022 was all about regulating “general purpose AI” (see the new Title IA of the Council’s General Approach), 2023 has very much been the year of generative AI and AI foundation models, following the very rapid adoption of ChatGPT (100 million users only two months after its launch!). The European Parliament has proposed to regulate foundation models irrespective of the risk they present, departing from the European Commission’s initial risk-based approach. The Council has yet to decide whether the focus should be on putting safeguards in place for foundation models, for general purpose AI, or for both.
According to the Parliament, a foundation model is "an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks". The Parliament also wants foundation models to meet certain requirements before they are placed on the EU market: risk mitigation; data governance; ensuring levels of “performance, predictability, interpretability, corrigibility, safety and cybersecurity”; energy and resource efficiency of the model; technical documentation; quality management system; registration of the model.
Some of these requirements go very far, such as the identification and mitigation of “all reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law”. It looks very challenging for providers to predict and mitigate those risks, as these foundation models are general in design. Ensuring the performance of such models during their entire life cycle also looks close to impossible, as the original provider will have little control over how the model is deployed by third parties.
Council and Parliament negotiators will have to agree on workable requirements for (large and small) developers and deployers of foundation models, or these requirements could become an impediment to AI design and deployment in Europe. Kai Zenner, Digital Policy Adviser to Axel Voss MEP, has proposed applying these requirements only to systemic foundation models, i.e. foundation models deemed to be “systemically relevant” based on metrics such as the amount of money invested in the model or the amount of compute used.
The European Parliament has also proposed specific rules for generative AI. Generative AI systems based on foundation models, like ChatGPT, would have to comply with transparency requirements (disclosing that the content was AI-generated, also helping distinguish so-called deep-fake images from real ones) and ensure safeguards against generating illegal content. Detailed summaries of the copyrighted data used for their training would also have to be made publicly available.
Some Member States, like France, have voiced concerns about such additional obligations for foundation models and generative AI, fearing the impact of overregulation on AI innovation in Europe.
To conclude, let’s not forget the provisions that shall foster AI innovation. They come towards the end of the text and unfortunately represent only a very small section of the Regulation (only 2 or 3 articles out of 85)… The principle of sandboxes is well known: innovative AI systems can be developed, trained, tested and validated under the supervision of a competent authority before they are placed on the market.
There were two conflicting visions on sandboxes: Member States wanted the establishment of AI regulatory sandboxes to be voluntary at national level, whereas the European Parliament wanted each Member State to establish at least one national AI sandbox by the application date of the Regulation. Some Member States have progressed faster than others: Spain already launched its AI regulatory sandbox in 2022.
We understand that an agreement has already been reached on this point: sandboxes will be mandatory, but Member States that are unable to develop their own sandbox will be allowed to join other Member States’ sandboxes or jointly establish one.
In this scorecard we aim to assess how likely the AI Regulation is to pass the four ‘tests’ described at the beginning of this blog.
[Scorecard: Will the AI Regulation pass the four tests?]
I hope you found this review of AI issues useful. If you want an insider’s view of the key questions to be addressed in trilogues, take a look at this interview with Kai Zenner here.
I will aim to update this scorecard when the final text of the AI Regulation is agreed upon, probably towards the end of the year. Stay tuned!
The Splunk platform removes the barriers between data and action, empowering observability, IT and security teams to ensure their organizations are secure, resilient and innovative.
Founded in 2003, Splunk is a global company with over 7,500 employees, more than 1,020 patents to date and availability in 21 regions around the world. It offers an open, extensible data platform that supports shared data across any environment, so that all teams in an organization can get end-to-end visibility, with context, for every interaction and business process. Build a strong data foundation with Splunk.