Self-optimization in the production hall

forschung leben – the Magazine of the University of Stuttgart

In cognitive production systems, machines train themselves automatically and rapidly

Machines that virtually program themselves will make production faster, more flexible, more forward-looking and more economical in the future. Such a “learning factory” is still a vision of the future, and the challenges are great. Scientists at the University of Stuttgart and the ARENA2036 research factory are working to turn this vision into reality.

Fishing a screw out of a toolbox and feeding it into the production system seems a simple task at first glance. But the process involves a multitude of partial steps: recognizing the screw among the jumble of parts, gripping it in the right place, checking the workpiece, turning the screw in the right place and in the right direction - humans grasp such processes intuitively. A traditional production robot, on the other hand, must be programmed by an expert for each individual step. “It takes a long time for the robot to function reliably, and the personnel-intensive programming is expensive,” explains Prof. Marco Huber, head of the Cognitive Production Systems department at the University of Stuttgart’s Institute of Industrial Manufacturing and Management (IFF).

In a cognitive production system, such programming takes place without human intervention. The robot learns autonomously, for example in a simulated environment or by imitation, which steps it has to carry out. From this it generates its own program and finally executes it. The idea of machines that can actually program themselves is still a dream of the future, “but in some areas, such as object recognition, this works very well, at least under laboratory conditions,” explains Huber. “We now want to put this into practice,” adds the scientist, who also heads the Center for Cyber Cognitive Intelligence (CCI) at the Fraunhofer IPA, which supports medium-sized companies in particular in applying machine learning to production issues.

Company founder Dr. Roman Bek

Greater complexity through Deep Learning

Deep Learning, a sub-field of machine learning based on neural networks, is a central factor in these developments. The concept dates back to the 1940s and is modeled on the functioning of neurons in the human brain, which receive information via synapses and “fire” when a critical value is reached. The American psychologist and computer scientist Frank Rosenblatt derived the so-called perceptron (a word derived from “perception”) from this process.

Perceptrons convert an input vector into an output vector and thus represent a simple memory whose contents can be accessed associatively. Neural networks arrange a large number of perceptrons in layers and can thereby represent non-linear mathematical functions.
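
To make this concrete, here is a minimal sketch of a single perceptron in Python; the screw-detection features, weights and threshold are purely hypothetical. The unit sums its weighted inputs and “fires” once a threshold is exceeded - stacking many such units in layers is what turns this simple building block into a neural network.

    import numpy as np

    def perceptron(x, w, b):
        # Weighted sum of the inputs; the unit "fires" (returns 1)
        # once the sum exceeds the threshold encoded in the bias b.
        return 1 if np.dot(w, x) + b > 0 else 0

    # Hypothetical example: decide whether a detected part is a screw,
    # based on two invented features (elongation, thread-texture score).
    x = np.array([0.9, 0.8])
    w = np.array([1.0, 1.5])    # weights learned during training
    b = -1.2                    # bias, i.e. the negative threshold
    print(perceptron(x, w, b))  # -> 1, interpreted as "screw"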

“The more layers such a network has, the more complex the problems it is able to solve,” explains Huber. At least in theory. Because, for a long time, the essential step, the training of neural networks, presented a major challenge. Training means adjusting the weights of the connections between the neurons - an enormous effort with several hundred thousand or even millions of weights. The first solution emerged in the 1980s with a core algorithm that made automated weight adjustment possible. The algorithm’s name, “backpropagation”, hints at the basic principle: deviations between the prediction made by the system and the actual data are fed back into the net “from back to front”, which enables the system to adapt gradually until the errors are minimized. Mathematically, this is the well-known principle of gradient descent, a numerical method for solving general optimization problems.
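
The following Python sketch illustrates the underlying idea of gradient descent on an invented toy data set; real backpropagation applies exactly this weight update layer by layer via the chain rule.

    import numpy as np

    # Minimal sketch: gradient descent on the weights of a single linear neuron,
    # using an invented toy data set. Backpropagation applies the same update
    # layer by layer via the chain rule.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))              # toy inputs
    y = X @ np.array([1.5, -2.0, 0.5])         # toy targets the neuron should learn

    w = np.zeros(3)                            # initial weights
    learning_rate = 0.1
    for _ in range(200):
        prediction = X @ w                     # forward pass
        error = prediction - y                 # deviation from the actual data
        gradient = X.T @ error / len(X)        # gradient of the mean squared error
        w -= learning_rate * gradient          # step "downhill" to reduce the error
    print(w)                                   # approaches [1.5, -2.0, 0.5]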

Driver of the breakthrough

The actual breakthrough for Deep Learning came with three developments in recent years: “By using graphics cards, we now have computing power that allows us to speed up the training effort. Secondly, the Internet provides us with a mass of data, and thirdly, there are high-quality software packages, some of which are inexpensive or even open source,” says Huber, describing the drivers of progress.

Data is the key to machine learning and the well-known principle of 'garbage in, garbage out' applies here.

Prof. Marco Huber

Nevertheless, the remaining challenges are considerable. One is called sim-to-real transfer and describes the transfer from a computer simulation to a real robot, for example. The problem is that there is no perfect simulation - so there can be no perfect training either. Another difficulty is that machine learning requires a lot of high-quality data. Masses of data are indeed generated during production, but their quality often leaves much to be desired. Sensor data is noisy, sometimes entire data fields are missing, or the data carries different time stamps because it is generated by machines whose clocks tick slightly differently. “Data is the key to machine learning,” says Huber, “and the well-known principle of 'garbage in, garbage out' applies here.”
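
As an illustration of the kind of data preparation this implies, the following sketch aligns two hypothetical sensor streams with slightly offset time stamps and fills a missing reading. It assumes the pandas library; the column names and values are invented.

    import pandas as pd

    # Two hypothetical sensor logs from machines whose clocks tick slightly differently.
    temp = pd.DataFrame({
        "time": pd.to_datetime(["2019-05-01 10:00:00.00",
                                "2019-05-01 10:00:01.02",
                                "2019-05-01 10:00:02.03"]),
        "temperature": [71.2, 71.9, 72.4]})
    force = pd.DataFrame({
        "time": pd.to_datetime(["2019-05-01 10:00:00.05",
                                "2019-05-01 10:00:01.00",
                                "2019-05-01 10:00:01.98"]),
        "force": [203.0, None, 198.5]})        # one missing reading

    # Align the two streams on the nearest time stamp within 100 ms
    merged = pd.merge_asof(temp, force, on="time",
                           direction="nearest",
                           tolerance=pd.Timedelta("100ms"))
    merged["force"] = merged["force"].interpolate()   # fill the gap before training
    print(merged)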

Prof. Marco Huber heads the Cognitive Production Systems department at the Institute of Industrial Manufacturing and Management (IFF) at the University of Stuttgart

Until a machine can make decisions as complex as those of a human being, more research into learning methods will be needed. One promising approach to planning actions is so-called reinforcement learning. A robot, for example, learns its job like a child through trial and error and receives a reward if it is successful: if it places its workpiece in the right position, it gains a point, but it gets minus points for dropping it - and this continues until it has mastered its job.
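
A heavily simplified, single-step sketch of this reward principle in Python might look as follows; the placement slots and reward values are invented purely for illustration.

    import random

    # Heavily simplified, single-step sketch of learning from rewards
    # (the placement slots and reward values are invented for illustration).
    actions = ["slot_A", "slot_B", "slot_C"]        # possible placements of the workpiece
    value = {a: 0.0 for a in actions}               # learned value of each action
    alpha, epsilon = 0.1, 0.2                       # learning rate, exploration rate

    def reward(action):
        return 1.0 if action == "slot_B" else -1.0  # +1 for the right position, -1 otherwise

    for _ in range(1000):
        # Explore occasionally, otherwise exploit the best action known so far.
        a = random.choice(actions) if random.random() < epsilon else max(value, key=value.get)
        value[a] += alpha * (reward(a) - value[a])  # nudge the estimate towards the reward
    print(value)                                    # slot_B ends up with the highest value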

Another research topic is “transfer learning”, i.e., transferring something that has already been learned to another task. Whilst humans can draw on their wealth of experience when dealing with similar problems, until now it has been necessary to start from the beginning every time a machine is programmed. Closely related to this is “meta-learning”, i.e. learning how to learn. At the moment, machine learning still involves a lot of trial and error. How many layers does a neural network need, what does each layer do, how many neurons does each layer need? “It would be good to have an algorithm that could determine the appropriate architecture for the neural network by itself,” hopes Huber.
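
A minimal sketch of the transfer-learning idea, assuming the PyTorch and torchvision libraries are available: a network pre-trained on generic images keeps its “experience”, and only a small output layer is retrained for the new task.

    import torch.nn as nn
    import torchvision.models as models

    # Sketch of transfer learning (assumes PyTorch/torchvision; newer torchvision
    # versions use the weights= argument instead of pretrained=True).
    model = models.resnet18(pretrained=True)       # "experience" from generic images
    for param in model.parameters():
        param.requires_grad = False                # freeze everything already learned
    model.fc = nn.Linear(model.fc.in_features, 2)  # new, trainable output layer
    # Only model.fc.parameters() are then passed to the optimizer and trained
    # on the much smaller data set of the new task.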

Bringing light into the black box

Last but not least, neural networks currently resemble a kind of black box: what happens inside them and why a neural network makes a certain decision remains in the dark. This becomes critical, for example, when clarifying questions of guilt in accidents involving autonomous driving, in the financial sector when a customer wants to know why he or she has been turned down for a loan, or in medicine when a surgical robot decides whether or not to remove a tumor. Under the General Data Protection Regulation, humans have a right to an explanation of any automated decision. In borderline cases, the black box can therefore prevent the application of something that is technically feasible.

Scientists are therefore trying to extract decision rules from neural networks and turn the black box into a white box. Unfortunately, that is an endless task, says Huber. “With complex systems, more and more rules are added over time, some of which even contradict each other. Then, at some point, the set of rules itself becomes a black box.” Nevertheless, the triumph of Deep Learning is unstoppable. The range of possible applications today extends from all phases of production through autonomous driving to voice assistants such as Alexa and Siri as well as DeepL, the online translation service.
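
One common idea from this line of research is a so-called surrogate model: a small, readable model is trained to imitate the black box, and its branches can then be read as decision rules. The sketch below illustrates the idea with scikit-learn; the data and the stand-in “black box” are invented for illustration.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Surrogate-model sketch (assumes scikit-learn; data and "black box" are invented):
    # a small decision tree learns to imitate the black-box predictions,
    # and its branches can then be read as human-readable decision rules.
    rng = np.random.default_rng(0)
    X = rng.random((500, 3))                             # toy input data
    def blackbox_predict(X):                             # stands in for the neural net
        return (X[:, 0] + X[:, 1] > 1.0).astype(int)

    surrogate = DecisionTreeClassifier(max_depth=3).fit(X, blackbox_predict(X))
    print(export_text(surrogate, feature_names=["feature_1", "feature_2", "feature_3"]))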

Quality assurance is a classic area of production in which artificial intelligence is already being used intensively, especially during final inspection. In the past, and sometimes still today, a workpiece would be inspected by a human being, for example to check that it had a perfect surface finish. The results are subjective and the task is tiresome. Today such inspections are usually camera-based, but the image processing has to be parameterized very elaborately. By combining the camera images with neural networks, the system can detect defects much more reliably and the risk of false alarms is reduced. In addition, such systems are becoming more robust, so that fluctuations in lighting, for example, have less of an effect.
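
A minimal sketch of such a learned inspection model, assuming the PyTorch library; the layer sizes and image dimensions are illustrative only.

    import torch
    import torch.nn as nn

    # Sketch of a small convolutional network that classifies camera images of a
    # workpiece surface as OK or defective (assumes PyTorch; sizes are illustrative).
    model = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),   # learned surface filters
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 2),                                        # two classes: OK / defect
    )
    dummy_image = torch.zeros(1, 1, 64, 64)    # one grayscale camera image, 64 x 64 pixels
    print(model(dummy_image).shape)            # -> torch.Size([1, 2])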

Artificial intelligence can also be used for predictive maintenance, a core component of Industry 4.0. Condition data from machines is used to predict when a part will fail and to plan the optimum maintenance time. In this context, neural networks can, for example, improve the forecast models for determining the remaining running time of a machine.
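
Formulated as a learning problem, this amounts to regression from condition data to remaining running time. The sketch below uses scikit-learn and invented toy data purely for illustration; in practice a neural network could take the place of the regressor.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Predictive-maintenance sketch (assumes scikit-learn; the condition data and
    # the relationship to the remaining running time are invented toy values).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 3))                        # e.g. vibration, temperature, hours
    remaining_hours = 500 - 80 * X[:, 0] + rng.normal(scale=5, size=300)

    model = GradientBoostingRegressor().fit(X, remaining_hours)
    print(model.predict(X[:1]))                          # forecast for the current state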

Agile production facilities

What is already possible in individual production areas will soon also apply to complete production lines. Dr. Matthias Reichenbach, head of the FutureTec team at Daimler AG, is working on precisely this at the University of Stuttgart’s ARENA2036 research factory. At present, the construction of such systems takes several months from planning through programming to commissioning. Only when the system is in operation does one find out whether the process works and what quality the products will have.

To make things faster and more agile in the future, the group is focusing on simplifying its systems, on intuition - and on its production staff. “The people on the assembly line know the production processes best and have the 'screwing skills'. We want to make their work easier, inspire them emotionally and unleash their creativity,” says Reichenbach, describing the objective. “People should enjoy technology and use it for more than its original purpose.”

A very practical example is small robot assistants that can easily be programmed by anyone without IT knowledge. The employee simply shows the robot what to do, and the robot then imitates the motion sequence - man and machine work hand in hand. Technology supports people in their daily work in the way that is best for them individually.

This is made possible by torque sensors originally developed at the German Aerospace Center (DLR) for the International Space Station (ISS). “The robot feels what it's doing,” explains Reichenbach. After just ten minutes, the sequence is already in place and the robot can, for example, screw in screws or install a battery pack and check that it is in the correct position.
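
A purely hypothetical sketch of this record-and-replay principle; the robot interface functions here are placeholders, not a real robot API.

    import time

    def get_joint_positions():
        return [0.0] * 6                 # placeholder for reading the arm's joint positions

    def move_to(joints):
        pass                             # placeholder for the motion command

    def record(duration_s=5, rate_hz=50):
        # While the employee guides the arm, its joint positions are sampled.
        trajectory = []
        for _ in range(int(duration_s * rate_hz)):
            trajectory.append(get_joint_positions())
            time.sleep(1 / rate_hz)
        return trajectory

    def replay(trajectory, rate_hz=50):
        # Afterwards the robot simply repeats the recorded motion sequence.
        for joints in trajectory:
            move_to(joints)
            time.sleep(1 / rate_hz)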

Dr. Matthias Reichenbach is programming a robot with torque sensor technology
Dr. Matthias Reichenbach's area of research is robot assistants that can easily be programmed to perform routine tasks without IT knowledge.

We need completely different levels of stability and process reliability for continuous operations. We will be working on this together with our partners at ARENA2036 or even in a real factory.

Dr. Matthias Reichenbach

As part of the “FluPro” (Fluid Production) project, in which a total of 18 partners are involved under the leadership of the Fraunhofer Institute for Manufacturing Engineering and Automation IPA, the easy-to-reconfigure and easy-to-program production modules are now to be linked step by step to form a complete assembly line for electromobility. In ARENA2036, the researchers are relying on the principle of swarm intelligence: rather than planning the system as a whole, each expert optimizes his or her specific area, which is then integrated into the overall system. The prototype should be ready by the end of 2019. However, it will be some time before a production system can be programmed in real life in just two days. “We need completely different levels of stability and process reliability for continuous operations. We will be working on this together with our partners at ARENA2036 or even in a real factory,” says Reichenbach confidently.

Further potential for AI

But self-training systems are not only of interest in production environments. Predictive maintenance, for example, is also an issue in the construction industry: industrial buildings contain a great deal of integrated technology, and maintaining it proactively can save a lot of money.

There is also enormous potential for artificial intelligence in the life sciences. Research into new drugs, for example, currently takes between seven and eight years on average, as countless databases have to be searched manually to find an active substance. In this context, artificial intelligence could automate the search process and make it more accurate, which could significantly reduce development times as well as the financial risks.

Biological transformation is the buzzword of the future. It refers to the convergence of technical and biological processes, which is intended to bring about greater sustainability in a wide variety of areas such as production or the housing sector. As with the digital transformation, information technologies such as AI play an essential role in achieving a high degree of automation or even autonomy. A Bio-Intelligence Competence Centre is currently being established with the participation of the Stuttgart Fraunhofer Institutes and the Universities of Stuttgart and Hohenheim. There, the development and use of bio-intelligent systems in different fields of application will be investigated.

Andrea Mayer-Grenu

Prof. Dr. Marco Huber
Institute of Industrial Manufacturing and Management IFF, University of Stuttgart
