Machines with morals

forschung leben – the Magazine of the University of Stuttgart

Philosophers are developing ethical principles for a successful collaboration between man and machine

Warrior robots that could kill of their own volition are without doubt one of the horror scenarios of applied artificial intelligence (AI). But similar questions arise for AI's use in smartphones, cars, the world of work and building technology: How should AI applications be designed so that they benefit human beings and respect our right to self-determination rather than harming us? Scientists at the University of Stuttgart’s Institute of Philosophy (PHILO) are trying to provide answers to these questions.

Patients diagnosed with dementia are quite capable of living an independent life in the initial phase of the disease. The aim of assistance systems in the residential environment is to maintain that independence for as long as possible as the disease progresses. Together with external partners, researchers at the University of Stuttgart have developed prototypes for such intelligent home technology in the “Design Adaptive Ambient Notification Environments” (DAAN) project.

In addition to engineers, the team also included philosophers, who designed a moral framework for DAAN: technology should not patronize or dominate people, but should respect their personal preferences and independent day-to-day decisions. To do this, the system must learn to respond to the user. “Biography work and memory care techniques are already used by nursing professionals,” explains Hauke Behrendt, a researcher at the Institute of Philosophy. “What is new is automating and digitizing this: self-learning systems that automatically adapt to the interests and needs of users.”

The philosophers’ central task was to find a basis for deciding how urgently DAAN reminds the patient of various activities. “Forgetting to water the flowers is less serious than forgetting to drink,” says Behrendt by way of example. The water glass on the table could vibrate as a reminder. What follow-up actions are taken if this is also ignored, for example at what point DAAN should notify relatives, can be configured. “But how much autonomy is given up in this scenario? How much manipulation is in play? Taking philosophical aspects into account when answering such questions is very beneficial for further development,” says Behrendt.
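The tiered logic described above, where a reminder's intrusiveness grows with the urgency of the activity and with how often a hint has been ignored, can be sketched in a few lines. This is a minimal illustration only; all names, tiers and thresholds are assumptions for the sake of the example, not the DAAN project's actual implementation.

```python
# Hypothetical sketch of DAAN-style tiered reminders. Names and
# thresholds are illustrative assumptions, not the project's code.

from dataclasses import dataclass

@dataclass
class Reminder:
    activity: str
    urgency: int          # 1 = low (water the flowers) ... 3 = vital (drink)
    ignored_count: int = 0

def escalation_step(reminder: Reminder) -> str:
    """Choose how intrusively to remind, based on urgency and on how
    often the hint has already been ignored. The user keeps the final
    decision; only the last, configurable tier involves other people."""
    level = reminder.urgency + reminder.ignored_count
    if level <= 1:
        return "ambient hint"        # e.g. a subtle change in lighting
    if level <= 3:
        return "vibrate object"      # e.g. the water glass vibrates
    return "notify relatives"        # configurable last resort

glass = Reminder("drink water", urgency=3)
print(escalation_step(glass))        # vibrate object
glass.ignored_count += 1
print(escalation_step(glass))        # notify relatives
```

The point of the sketch is the design choice the philosophers argued for: each tier only points out a possibility, and escalation beyond the user, such as notifying relatives, is an explicitly configured last step rather than a default.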

“At the same time, certain values that are close to our hearts must be taken into account.” This raises questions of data ethics, such as who owns the data recorded by the assistance systems and who is allowed to use it. Not least among the concerns is the classic conflict between autonomy and well-being: “Should the system remind drinkers that they have not yet reached their alcohol limit?” asks Behrendt. In any event, DAAN should only point out possibilities and leave the decision to the user. “Provided that these possibilities rule out overtly self-damaging behavior.”

But how much autonomy is given up in this scenario? How much manipulation is in play? Taking philosophical aspects into account when answering such questions is very beneficial for further development.

Dr. Hauke Behrendt
Tool of oppression and control or useful aid? The system projects instructions directly onto the work surface at the personalized Future Work Lab assembly workstation at the Fraunhofer IAO.

Ethical AI as a unique selling point

AI was also the focus of the interdisciplinary “motionEAP” project. Watched by a camera with motion detection, fitters assemble components, for example in workshops for the disabled. If they make a mistake or are unsure how to proceed, the system projects the relevant information directly onto the work surface. This posed related questions to the developers: To what extent should the system be allowed to monitor the worker, and what level of intervention should be permitted? Who has access to the data? As with DAAN, the philosophers involved in this project collaborated with the Institute for Visualization and Interactive Systems (VIS). “The German government takes the topic of digitization very seriously and allocates a great deal of research funding to this area,” says Behrendt, “however, always under the proviso that project partners take an active part in ethical reflections.” He is convinced that a unique selling proposition could emerge in this regard: “In countries such as China, ethical reflection is not currently as important as it is in Germany.”

To arrive at their decisions, the philosophers use so-called reflexive circles: they derive specific instructions for action from moral principles, weigh these against intuitions and other judgements, and may in turn modify the principles. To check whether their reasoning is comprehensible, consistent and conclusive, they exchange ideas with colleagues from other disciplines, users and affected persons, and present their ideas for discussion in colloquia. In addition to applied ethics in connection with specific projects, they also consider fundamental questions raised by digitization. “Natural resources are running out, climate change is looming, and the surplus value produced is being distributed more and more unequally: will digitization exacerbate these developments?” Behrendt wonders. “It is necessary to think about what form of economic activity, social participation and coexistence is needed to shape the digital transformation in a sustainable way.”

It is necessary to think about what form of economic activity, social participation and coexistence is needed to shape the digital transformation in a sustainable way.

Dr. Hauke Behrendt

Like the earlier industrial revolutions, digitization could spell disaster for many people. Behrendt, however, stresses the opportunities: “If we reflect upon that and have the will to actively shape the process, then we could make huge gains in terms of knowledge and self-empowerment.” Rather than emphasizing the antagonism between man and machine, the philosopher argues in favor of uniting the best of both worlds: “Algorithms can already detect cancer much better than experienced chief physicians. However, armed with this technology, a chief physician is even more competent in his or her judgement. My hope would be that we forgo certain efficiency gains and instead equip people with the technology.” Then nobody would have to fear losing their job to a robot, or even being shot by one!

Daniel Völpel

Hauke Behrendt
Academic Staff Member at the Institute of Philosophy, University of Stuttgart

Contact


University Communications

Keplerstraße 7, 70174 Stuttgart
