
“EU Ethics Guidelines for AI Are Just the Beginning,” Law360, June 26, 2019.


The EU recently took the first step toward potential direct regulation of artificial intelligence. In particular, on April 8, 2019, the European Commission's (EC) High-Level Expert Group on Artificial Intelligence (HLEG) released the final version of its "Ethics Guidelines for Trustworthy Artificial Intelligence."[1]

The guidelines, although not legally binding, are important because they represent the first significant government-initiated effort to influence the use of AI systems. The principles within this initial set of guidelines could form the basis of AI regulation in the EU and elsewhere. Indeed, in its initial communication to the European Parliament, the EC expressly contemplated a future "ethical and legal framework";[2] the guidelines represent a first attempt at the ethical framework.

Why the Guidelines?

Over the past year, the European Union has significantly increased its efforts to foster the EU’s competitiveness in developing and implementing artificial intelligence. In April 2018, the EC presented a comprehensive strategy on AI for Europe that focused on increasing public and private investment to €20 billion per year for the next decade, preparing for socioeconomic challenges and ensuring an appropriate ethical and legal framework.[3]

As part of these efforts, the European Commission created the HLEG, which published the guidelines setting out a framework for a "human-centric, rights-based approach" to AI.

The Structure of the Guidelines

The guidelines focus on creating "trustworthy" AI and set forth a framework for achieving it in three parts: (1) defining guiding ethical principles based on EU fundamental rights, (2) defining key requirements based on those guiding principles, and (3) providing an assessment list to aid in operationalizing the key requirements. We discuss each of these in turn below.

Guiding Ethical Principles

The guidelines identify four foundational ethical principles: (1) respect for human autonomy, (2) prevention of harm, (3) fairness and (4) explicability. While all these principles are broad-based and open to significant interpretation, they are expressly rooted in the EU treaties, the EU charter and international human rights law.

The Seven Requirements for Trustworthy AI

The guidelines then distill the four ethical principles into seven technical and nontechnical requirements for the implementation of trustworthy AI. The HLEG intends that these seven requirements be applied whenever an AI-based system is implemented. The guidelines emphasize that the requirements should be embedded throughout the AI system's life cycle using technical and nontechnical methods, which means continuous oversight, including human intervention and discretion. Although styled as "requirements," because the guidelines are nonbinding, they are better thought of as touchstone considerations when deciding how to implement AI. The seven requirements are:

  • Human Agency and Oversight: AI systems must serve human aims, operate with human consent and allow humans to exercise a degree of control and discretion over the system's operation.
  • Technical Robustness and Safety: AI systems must be designed in a manner that is resilient to attack, includes a fallback plan, minimizes inaccuracy and maximizes reliability and reproducibility.
  • Privacy and Data Governance: AI should be designed to “guarantee” privacy and data protection throughout its life cycle.
  • Transparency: AI should be transparent and explainable and should not represent itself as human.
  • Diversity, Nondiscrimination and Fairness: AI systems should not suffer from “unfair bias” and should be built from an inclusive design process.
  • Societal and Environmental Well-being: AI should benefit society as a whole, including through sustainability and ecological responsibility.
  • Accountability: AI systems should be auditable and have a degree of human accountability standing behind them.

Due to the breadth of the requirements, tradeoffs among them are inevitable, as the guidelines acknowledge. For instance, predictive policing may create tension between societal and environmental well-being (reduction of crime) and human agency and oversight (infringement of individual liberty and privacy). How these tradeoffs will ultimately be managed in any to-be-developed legal framework is not clear.

Assessment List and Pilot Program

Although the guidelines start out high-level and principles-based, the HLEG included in them an implementable pilot assessment list for trustworthy AI. The assessment list is primarily addressed to developers and deployers of AI systems that directly interact with users. It includes compliance planning tools such as impact and risk assessments, accountability tools and breach/system failure response plans.

Current and Future Issues

The guidelines are a significant first step toward developing a legal framework to evaluate AI. Below are some considerations and trends to watch as this legal framework develops.

Direct Regulation of AI?

The European Commission's original 2018 communication suggests that the initial development of ethical guidelines will be followed in due time by legal requirements. In many ways, the guidelines (and the tensions among them) demonstrate the difficulty of crafting any generalized artificial intelligence regulation that gives the industry clear guidance on what is (and is not) permissible. Put another way, use cases for artificial intelligence cut across all industries, and a one-size-fits-all approach that ignores industry context and specific facts would be particularly difficult to manage (e.g., health care diagnosis AI should be regulated differently from AI-based email assistance programs).

On the other hand, in the U.S. at least, there is at present little momentum for broad-spectrum AI regulation. That said, a few industry- and use-case-specific laws relating to AI have been enacted, such as California's bot law[4] (which requires disclosure of the use of automated chatbots) and the Illinois AI Video Interview Act[5] (which prohibits the use of AI to evaluate interview videos absent disclosure).

Privacy and Data Protection “Guarantee”

Privacy and data protection law has been the area of law most directly affected by the development and use of AI. From enabling individuals to be reidentified, to creating algorithms that are not understandable or transparent, to profiling individuals to predict their behavior (with questionable accuracy), AI raises particular concerns for the fundamental rights to privacy and data protection. Given AI's autonomous component, AI algorithms and systems must be designed, developed and continually reviewed with privacy and data protection in mind.

The guidelines recognize that the EU's comprehensive General Data Protection Regulation[6] plays a part in AI's legal framework, since it regulates certain dimensions of AI, such as an individual's right not to be subject to automated decision-making. However, compliance with the GDPR is just a start, since the GDPR does not address all the privacy, data protection and ethical aspects of trustworthy AI. For example, under the GDPR, one must implement appropriate technical and organizational security measures, which leaves open to interpretation what is "appropriate" in the vast landscape that is AI.

Finally, artificial intelligence is fueled by access to, and processing of, significant amounts of data. This is directly at odds with the privacy principles of data minimization, limited retention and timely deletion.

Difficulties With Transparency, Causation and Explicability

Of the seven requirements, "transparency" is perhaps the toughest for mainstream AI systems to achieve at present. For instance, commonly used machine learning models based on neural networks remain nearly inexplicable even when the hidden layers are revealed and the underlying mathematics is laid out. In other words, while the results of the algorithm may show startling success in drawing conclusions from a dataset, the "how" can be nearly impenetrable.

While there are certainly ways to back into what is happening, or to correlate inputs with results, that is merely observing correlation without understanding causation. For mission-critical or high-risk applications, such as health care, the inability to explain or assess how an algorithm draws its conclusions, the scientific basis for those conclusions or the underlying logic of the decision may rule out these types of algorithms.

Defining AI?

For AI to be regulated, it must be defined. "Artificial intelligence" is a vague concept, and the definition used in the guidelines is necessarily vague as well. The European Commission initially defined artificial intelligence as "a system that display[s] intelligent behavior by analyzing [its] environment and taking action — with some degree of autonomy — to achieve specific goals."

The HLEG eventually determined that this high-level statement was not enough and, in parallel with the guidelines, published a nine-page paper elaborating on the definition of AI.[7] However, the essence of the original European Commission definition remains; the most significant change is the added requirement that artificial intelligence adapt its behavior by analyzing prior actions.

What’s Next?

The guidelines demonstrate the EU’s commitment to being at the forefront of creating an ethical and legal framework for AI. However, the guidelines also reflect the complexities of such a task.

This summer, the HLEG will launch a formal piloting phase of the assessment list, involving stakeholders from the public and private sectors, to test how the seven requirements can be operationalized. This will be a pivotal point in the EU's determination of what may be implemented, and likely regulated, at a horizontal level and in which areas a complementary or supplemental sectoral approach makes sense. The HLEG has already indicated, in its second deliverable addressing AI policy and investment recommendations, its willingness to recommend that regulation be revised, amended or adopted. However, the nuanced approach necessary to address AI legislatively and ethically means we are just at the beginning of a long road.


The opinions expressed are those of the author(s) and do not necessarily reflect the views of the firm, its clients, or Portfolio Media Inc., or any of its or their respective affiliates. This article is for general information purposes and is not intended to be and should not be taken as legal advice.


[1] High-Level Expert Group on Artificial Intelligence (2019) Ethics Guidelines for Trustworthy AI.

[2] See European Commission (2018) Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Artificial Intelligence for Europe, Section 3.3.

[3] See generally id.

[4] Cal. Bus. & Prof. Code §§ 17940–17943.

[5] Ill. HB 2557, 101st Gen. Assembly (2019).

[6] Regulation (EU) 2016/679 (General Data Protection Regulation).

[7] See High-Level Expert Group on Artificial Intelligence (2019) A Definition of AI: Main Capabilities and Disciplines.