
Date: March 25, 2020, No. 21

Preventing incorrect decisions using AI

NoBIAS consortium with participation of the University of Stuttgart investigates bias in artificial intelligence
[Picture: pixabay/ Gerd Altmann]

Whether credit reports, car insurance, or medical tests: Artificial intelligence (AI) is currently being used for decisions that have major consequences for individuals and society as a whole. However, the data may be interpreted incorrectly or unfairly by the underlying algorithms, leading to discriminatory decisions. Detecting and preventing such bias is the goal of the European research project NoBIAS (Artificial Intelligence without Bias), in which Dr. Steffen Staab, Chair of “Analytic Computing” and Cyber Valley Professor at the University of Stuttgart, is involved.

In a credit rating report, for example, artificial intelligence is used to estimate whether a loan can be successfully repaid or not. Data such as the applicant's salary are part of the decision criteria. However, if these data are evaluated automatically, a seemingly plausible criterion can lead to an unfair decision. For example, because women earn less on average, an automated decision might result in them being denied a loan – even though they most likely would have paid it off. In other areas, such bias could lead to people being denied a job, medical treatment, or certain information.

Such bias can occur at all stages of AI-based decision-making processes: when data is collected, when algorithms transform data into decision-making capacity, or when decision-making results are used in various functional areas. In this context, the scientists in NoBIAS are researching and developing methods that enable unbiased decisions based on AI. First, they start with the large quantity of data and check whether it can be used as a fair basis. Second, they focus on powerful algorithms driven by machine learning methods. Finally, they try to explain the decisions of the AI and make them transparent. The ultimate goal is to identify biased and discriminatory AI decision-making and to offer solutions that allow the potential of AI to be utilized while ensuring compliance with legal and social norms. The researchers therefore go beyond traditional AI algorithms optimized for predictive performance and embed ethical and legal principles in AI algorithms.
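To give a concrete sense of what such a bias check can look like, the following minimal Python sketch computes a demographic-parity gap over automated loan decisions: the difference in approval rates between groups. All data, names, and thresholds here are hypothetical illustrations, not the NoBIAS project's actual tooling or methods.

```python
# Minimal sketch of a demographic-parity audit on automated decisions.
# All data and group labels are hypothetical; this is an illustration,
# not NoBIAS software.

def approval_rates(decisions, groups):
    """Approval rate (fraction of positive decisions) per group."""
    rates = {}
    for group in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def demographic_parity_gap(decisions, groups):
    """Largest difference in approval rates between any two groups.

    A gap near 0 suggests parity; a large gap flags a decision
    process that should be examined for bias.
    """
    rates = approval_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan decisions (1 = approved) and applicant groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(decisions, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs. 0.25 -> 0.50
```

A check like this only covers the first of the three steps described above (auditing the data and decisions); explaining *why* a model produces a gap requires the interpretability methods the project also investigates.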

The NoBIAS consortium comprises researchers from eight organizations in five European countries. This interdisciplinary consortium combines expertise in the fields of AI, law, and sociology. The network is complemented by 10 associated non-academic partners with various functions, including banks, insurance providers, and pharmaceutical companies.

Prof. Steffen Staab, University of Stuttgart, Institute for Parallel and Distributed Systems, Department of Analytic Computing

Press contact:

Andrea Mayer-Grenu
Scientific Consultant, Research Publications
