How Do Algorithms Decide? Peering into the Black Box

forschung leben – the magazine of the University of Stuttgart (Issue March 2021)

AI algorithms are increasingly making decisions that have a direct impact on humans, yet greater transparency into how such decisions are reached is required.
[Photo: Universität Stuttgart/IFF]

As an employer, Amazon is much in demand, and the company receives a flood of applications. Little wonder, therefore, that it sought ways to automate the pre-selection process, which is why the company developed an algorithm to filter out the most promising applications. This AI algorithm was trained on employee data sets to enable it to learn who would be a good fit for the company. However, the algorithm systematically disadvantaged women: because more men had been recruited in the past, far more of the training data sets related to men than to women, as a result of which the algorithm identified gender as a knockout criterion. Amazon finally abandoned the system when it became clear that this bias could not be reliably ruled out despite adjustments to the algorithm.

This example shows how quickly someone could be placed at a disadvantage in a world of algorithms, without ever knowing why, and often without even knowing it. “Should this happen with automated music recommendations or machine translation, it may not be critical,” says Marco Huber, “yet it is a completely different matter when it comes to legally and medically relevant issues or in safety-critical industrial applications.”

Marco Huber
According to Prof. Marco Huber, a comprehensible solution is sometimes more important than the optimal one.

Huber is a Professor of Cognitive Production Systems at the University of Stuttgart’s Institute of Industrial Manufacturing and Management (IFF) and also heads the Center for Cyber Cognitive Intelligence (CCI) at the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA).

Those AI algorithms that achieve a high prediction quality are often the ones whose decision-making processes are particularly opaque. “Neural networks are the best-known example,” says Huber: “They are essentially black boxes because it is not possible to retrace the data, parameters, and computational steps involved.” Fortunately, there are also AI processes whose decisions are traceable, and Huber’s team is now using these to shed light on neural networks. The idea is to make the black box transparent (or “white”).

Making the box white through simple yes-no questions

One approach involves decision tree algorithms, which present a series of structured yes-no (binary) questions. These are familiar from school: anyone who has been asked to graph all possible combinations of heads and tails when flipping a coin multiple times will have drawn a decision tree. The decision trees Huber’s team uses are, of course, more complex.

Graphic of a decision tree
This decision tree shows the decision-making process of the neural network. It’s all about classification: bump or scratch? The yellow nodes represent a decision in favor of a bump, whilst the green ones correspond to a decision in favor of a scratch.
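
To make this concrete, here is a minimal sketch of such a tree in Python with scikit-learn, applied to a bump-or-scratch classification like the one in the figure above (the feature names and measurements are invented for illustration):

```python
# Toy decision tree: a chain of yes/no questions over invented damage data.
# Requires scikit-learn (pip install scikit-learn).
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: [dent_depth_mm, mark_length_mm]
X = [[0.2, 12.0], [0.3, 15.0], [2.5, 3.0],
     [3.0, 2.0], [0.1, 20.0], [2.8, 4.0]]
y = ["scratch", "scratch", "bump", "bump", "scratch", "bump"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the learned yes/no questions as human-readable rules.
print(export_text(tree, feature_names=["dent_depth_mm", "mark_length_mm"]))
```

Each printed rule corresponds to one branch of the tree, which is exactly what makes this representation easy to audit.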

“Neural networks need to be trained with data before they can even come up with reasonable solutions,” he explains, whereby “solution” means that the network makes meaningful predictions. The training represents an optimization problem for which different solutions are possible; these depend not only on the input data but also on boundary conditions, which is where decision trees come in. “We apply a mathematical constraint to the training to ensure that the smallest possible decision tree can be extracted from the neural network,” Huber explains. And because the decision tree renders the forecasts comprehensible, the network (black box) is rendered “white”. “We nudge it to adopt a specific solution from among the many potential solutions,” says the computer scientist: “probably not the optimal solution, but one that we can retrace and understand.”
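
The mathematical constraint itself is the team’s own method, but the underlying idea can be sketched in a simpler, post-hoc form: train a network, then fit a shallow decision tree to reproduce its predictions. A minimal sketch, assuming synthetic data and arbitrary model sizes:

```python
# Simplified sketch of the surrogate idea: fit a shallow decision tree that
# mimics a trained neural network. (The team's actual method constrains the
# network during training itself; that step is not reproduced here.)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# The "black box": a small neural network.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

# The "white box": a shallow tree trained on the network's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    X, net.predict(X))

agreement = np.mean(surrogate.predict(X) == net.predict(X))
print(f"Surrogate reproduces {agreement:.0%} of the network's decisions")
```

The closer the surrogate’s agreement is to 100%, the more faithfully the small tree explains what the network actually does.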

Neural networks
The neural networks used in Artificial Intelligence (AI) algorithms are modeled on the human brain: artificial neurons receive and process information and communicate with one another.

The counterfactual explanation

There are other ways of making neural network decisions comprehensible. “One that is easier for lay people to understand than a decision tree, in terms of its explanatory power,” Huber explains, “is the counterfactual explanation.” For example: when a bank rejects a loan request based on an algorithm, the applicant could ask what would have to change in the application data for the loan to be approved. It would then quickly become apparent whether the applicant was being systematically disadvantaged or whether the loan really was not feasible given their credit rating.
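
In its simplest form, a counterfactual search is just a probe of the model: vary an input until the decision flips. The sketch below uses an invented, deliberately transparent loan rule purely to illustrate the question of what would have to change:

```python
# Minimal counterfactual sketch for an invented loan rule: starting from a
# rejected application, find the smallest income increase that flips the
# decision. Model, features, and thresholds are all hypothetical.
def loan_approved(income_eur: float, debt_eur: float) -> bool:
    return income_eur - 0.5 * debt_eur >= 30_000

income, debt = 28_000.0, 10_000.0
assert not loan_approved(income, debt)  # the application is rejected

# Counterfactual question: how much more income would have been needed?
extra, step = 0.0, 500.0
while not loan_approved(income + extra, debt):
    extra += step

print(f"Approved if income were {extra:.0f} EUR higher")  # -> 7000 EUR
```

For a real black-box model the search is harder, but the answer has the same form: the smallest change to the input that changes the outcome.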

Many youngsters in Britain might have wished for a counterfactual explanation of that kind in 2020. Final exams were cancelled due to the Covid-19 pandemic, after which the Ministry of Education decided to use an algorithm to generate final grades. The result was that some students received grades well below what they had expected, which caused an outcry throughout the country. The algorithm took two main aspects into account: an assessment of the individual’s general performance, and the exam results achieved at the respective school in previous years. As such, the algorithm reinforced existing inequalities: a gifted student automatically fared worse at an at-risk school than at a prestigious school.

Graphical representation of a neural network.
The neural network: the white dots in the left column represent the input data whilst the single white dot on the right represents the output result. What happens in between remains mostly obscure.

Identifying risks and side effects

In Sarah Oppold’s opinion, this is an example of an inadequately implemented algorithm. “The input data was unsuitable and the problem to be solved was poorly formulated,” says the computer scientist, who is currently completing her doctoral studies at the University of Stuttgart’s Institute of Parallel and Distributed Systems (IPVS), where she is researching how best to design AI algorithms in a transparent manner. “Whilst many research groups are primarily focusing on the model underlying the algorithm,” Oppold explains, “we are attempting to cover the entire chain, from the collection and pre-processing of the data through the development and parameterization of the AI method to the visualization of the results.” The objective in this case is thus not to produce a white box for individual AI applications, but rather to represent the entire life cycle of the algorithm in a transparent and traceable manner.

The result is a kind of regulatory framework. In the same way that a digital image contains metadata such as exposure time, camera type, and location, the framework would attach explanatory notes to an algorithm – for example, that the training data refers to Germany and that the results are therefore not transferable to other countries. “You could think of it like a drug,” says Oppold: “It has a specific medical application and a specific dosage, but there are also associated risks and side effects. Based on that information, the health care provider will decide which patients the drug is appropriate for.”
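
Such notes could travel with a model in much the same way EXIF metadata travels with a photo. A minimal sketch, assuming an invented schema (this is not Oppold’s actual framework):

```python
# Hypothetical "metadata for an algorithm", analogous to EXIF data in a photo.
# The schema is invented for illustration; it is not Oppold's framework.
# Requires Python 3.9+ for the list[str] annotation.
from dataclasses import dataclass, field

@dataclass
class ModelMetadata:
    task: str                  # what the model was built to do
    training_data: str         # provenance of the training data
    valid_scope: str           # where the results may be applied
    known_risks: list[str] = field(default_factory=list)

card = ModelMetadata(
    task="pre-screening of loan applications",
    training_data="applicants in Germany, 2015-2020",
    valid_scope="Germany only; not transferable to other countries",
    known_risks=["may reproduce historical bias in past approvals"],
)
print(card)
```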

The framework has not yet been developed to the point where it can perform comparable tasks for an algorithm. “It currently only takes tabular data into account,” Oppold explains: “We now want to expand it to take in imaging and streaming data.” A practical framework would also need to incorporate interdisciplinary expertise, for example from AI developers, the social sciences and lawyers. “As soon as the framework reaches a certain level of maturity,” the computer scientist explains, “it would make sense to collaborate with the industrial sector to develop it further and make the algorithms used in industry more transparent.”

Text: Michael Vogel

Can AI discriminate? This is the subject of the IRIS research focus at the University of Stuttgart.
