An Ethics Label for AI

forschung leben – the magazine of the University of Stuttgart


The question as to whether a given algorithm respects certain values, such as transparency, privacy, and justice, should be answerable in the future. Artificial Intelligence has to be ethically assessable; a group of researchers involving the University of Stuttgart has now developed a practical proposal for this.

Whenever you purchase an LED light, a freezer, or even a car, you will almost inevitably come across an energy efficiency label, an intuitive indication of power consumption that has been mandatory for an increasing number of products within the EU for over a decade. The label is far from perfect, not least because products can be “optimized” to score well on it. Nevertheless, it is now well established among consumers and often serves as a point of reference.

The energy label also served – at least in part – as the inspiration for a new form of label, which indicates whether a given AI algorithm complies with ethical principles.

Whenever algorithms such as these make decisions that have consequences for people, the way in which those decisions are reached must be comprehensible, for both ethical and legal reasons.

Dr. Andreas Kaminski, head of the “Philosophy of Science & Technology of Computer Simulation” working group at the HLRS


There are two main approaches to designing such a classification structure: one is to integrate a set of ethical rules into the AI models themselves, the other is to certify AI procedures based on ethical criteria. “When it comes to practical implementation, both approaches are beset with fundamental problems,” says Kaminski. “In the first approach, it is not possible to take account of implicit rules.” For example, the road traffic regulations do not specify how to merge in heavy traffic, which is often only possible without a long wait if one does not strictly comply with the rules. “The second approach invites the exploitation of gray areas, and the identity of the ultimate decision-maker often remains unclear.”

Philosopher Dr. Andreas Kaminski of the HLRS, Stuttgart, and his team have developed a concept for evaluating the ethics of AI algorithms.

Kaminski is a member of the AI Ethics Impact Group, a consortium jointly initiated by the technology association VDE and the Bertelsmann Stiftung. This interdisciplinary group combines expertise from the fields of computer science, philosophy, technology assessment, engineering, and the social sciences.

“We have developed a practical, applicable concept for AI ethics that meets three criteria,” Kaminski explains. “First, it can be applied with pluralistic values, i.e., in different societies. Second, it always evaluates an AI application in its specific context. Third, one can understand how the evaluation is arrived at.”

Making Ethical Values Measurable

Visually, the evaluation results are presented in much the same way as the energy label, as an AI ethics label, so to speak. “Our concept is suitable for very different groups such as consumers, stakeholders, decision-makers, and buyers,” Kaminski explains. “If necessary, they can learn more than what the visual display indicates, which creates incentives for companies to actually adapt their algorithms accordingly.”

The catalog of criteria lists: transparency, accountability, privacy, justice, reliability, and sustainability. Each can attain a grade from A to G, with A being the best.
The ethics label for Artificial Intelligence rates algorithms. It was developed by the Artificial Intelligence Ethics Impact Group.

The concept is not based solely on the AI Ethics Label, which places values such as transparency, liability, privacy, justice, reliability, and sustainability in categories ranging from A to G; it includes two additional elements. One is a model developed by the philosopher Christoph Hubig that makes the aforementioned values measurable; Kaminski's team also worked on this. “We defined criteria for each value and identified the metrics that contribute towards those criteria,” Kaminski explains. “This enables one to take account of value conflicts and dependencies. We take a differentiated view of the relevant values, which don't have to be determined in absolute terms.” This leaves room for evaluating an AI algorithm in its specific application context.
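The following Python sketch merely illustrates the general idea of such a layered rating: indicator scores feed into a weighted score for one value, which is then mapped to a grade from A to G. All names, scores, weights, and the grading formula are hypothetical assumptions for illustration and are not taken from the AI Ethics Impact Group's actual scheme.

```python
# Illustrative sketch only: indicator names, weights, and the A-G mapping are
# hypothetical, not the AI Ethics Impact Group's actual methodology.

GRADES = "ABCDEFG"  # A = best, G = worst


def grade(score: float) -> str:
    """Map a normalized score in [0, 1] (1 = best) to a letter grade A-G."""
    index = min(int((1.0 - score) * len(GRADES)), len(GRADES) - 1)
    return GRADES[index]


def rate_value(indicators: dict, weights: dict) -> str:
    """Aggregate indicator scores for one value (e.g. transparency) into a grade."""
    total_weight = sum(weights.values())
    score = sum(indicators[name] * weights[name] for name in weights) / total_weight
    return grade(score)


# Hypothetical indicators for the value "transparency" of one AI application.
transparency_indicators = {
    "documentation_of_training_data": 0.8,
    "explainability_of_decisions": 0.5,
    "disclosure_of_model_limitations": 0.7,
}
transparency_weights = {
    "documentation_of_training_data": 1.0,
    "explainability_of_decisions": 2.0,  # weights could reflect conflicts and dependencies
    "disclosure_of_model_limitations": 1.0,
}

print(rate_value(transparency_indicators, transparency_weights))  # prints "C"
```

The point of such a structure is that the grade is traceable: anyone reading the label can, in principle, drill down from the letter to the criteria and metrics behind it.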

Not everything needs to be subject to exactly the same ethical regulations.

Dr. Andreas Kaminski


“It makes a difference whether an AI algorithm recommends an item of clothing based on previous purchasing behavior or makes a medical diagnosis.” The concept addresses this through a third element, a risk matrix. “This contrasts the magnitude of the potential harm a given AI algorithm could do with the degree to which decision-making relies on that algorithm,” Kaminski explains. “Risk classes can then be derived from this.”
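As a rough illustration of this third element, the sketch below shows how a risk class could be read off from a matrix of potential harm versus the degree of reliance on the algorithm. The axes, levels, and class assignments are invented for illustration and do not reproduce the group's actual matrix.

```python
# Illustrative sketch only: the axes and class assignments are hypothetical and
# merely show the idea of deriving risk classes from harm vs. reliance.

HARM = ["negligible", "moderate", "substantial", "severe"]        # potential damage
RELIANCE = ["human decides", "human reviews", "fully automated"]  # role of the algorithm

# Risk classes 0-4: rows = harm, columns = reliance.
RISK_MATRIX = [
    [0, 0, 1],  # negligible harm, e.g. a clothing recommendation
    [1, 2, 2],
    [2, 3, 3],
    [3, 4, 4],  # severe harm, e.g. a medical diagnosis
]


def risk_class(harm: str, reliance: str) -> int:
    """Look up the risk class for a given combination of harm and reliance."""
    return RISK_MATRIX[HARM.index(harm)][RELIANCE.index(reliance)]


print(risk_class("negligible", "fully automated"))  # 1: low-stakes, even if automated
print(risk_class("severe", "human reviews"))        # 4: high-stakes despite human review
```

A higher risk class could then trigger stricter requirements on the label's individual grades, while the lowest class might need no ethics rating at all.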

The AI Ethics Impact Group's proposal is generating interest. The EU Parliament, for example, has been looking into it, as has the High-Level Expert Group on AI, an EU advisory body. The concept has also been discussed by the German Ethics Council and the IEEE engineering association. The German Ministries of Justice and Labour are currently working on a project to assess how the concept could be implemented in an employment and administrative context.

Editor: Michael Vogel

Dr. Andreas Kaminski, Head of Philosophy of Science & Technology of Computer Simulation at the High-Performance Computing Center Stuttgart (HLRS), e-mail, phone: +49 711 68565982

Philosophy of Science & Technology of Computer Simulation

Contact

 

University Communications

Keplerstraße 7, 70174 Stuttgart
