In large solar parks, tens of thousands of photovoltaic modules jointly generate electricity. Until now, experts have had to examine photos of each individual solar cell in these modules to determine whether any of them have been damaged by thunderstorms or aging. Researchers at the University of Stuttgart have now trained a computer program to analyze millions of such images reliably and quickly.
A hailstorm has lashed down on the solar panel, and entire areas in the photo of a particular photovoltaic module taken by the Solarzentrum Stuttgart (SZS) are black. These areas are no longer functioning, because current only flows where the electroluminescence (EL) images are light. The daylight EL method (Daylight Luminescence System, DaySy) for the inspection of large solar parks, which is protected by a global patent, was developed by the SZS, a spin-off of the Institute for Photovoltaics (IPV) at the University of Stuttgart, in collaboration with the IPV itself. The scientists, together with the Institute of Signal Processing and System Theory (ISS), are now developing a program that will analyze DaySy EL images of solar parks automatically, in a research project entitled “PARK” funded by the Federal Ministry of Economics and Technology. “Photovoltaics are making an important contribution to the transition to renewable energies,” says Alexander Bartler, a researcher at the ISS whose master’s thesis dealt with the self-learning artificial analysis eye. Today’s large solar parks already generate one gigawatt and more. “However, this can decrease over time if defects occur in the modules.”
Appearances alone do not suffice
To check a medium-sized solar park with an output of 300 megawatts for such defects, more than 86 million images of individual solar cells have to be examined in the DaySy EL process. Put simply, the way the cell works is reversed: normally, radiation hits the silicon cell, which converts it into electrical energy. If electricity is sent through the cell instead, it emits infrared radiation, but only where it is intact. “By recording this with a special camera and a special measuring method, it is possible to see which structures within the semiconductor are defective and which are intact, which is not possible with the naked eye,” explains Bartler. “Until a year and a half ago, experts had to analyze each image manually,” the researcher continues. Given the large number of pictures involved, this can be an expensive undertaking for the operators of solar parks.
According to Michael Reuter, Managing Director of the SZS, it is advisable to inspect all modules not only after delivery, after installation and before the warranty expires, but also after every severe storm, which is what spawned the idea of automating the process: “We took cell images from the SZS that had already been evaluated and used them as training data for an artificial neural network,” explains Bartler. In the first stage, the program simply distinguishes between “intact” and “defective”. “This will save us an enormous amount of time.” For a module with 60 cells, the computer needs at most one second. A human being needs about 60 to 120 seconds for an assessment of the same accuracy, and cannot keep this up for several hours at a time, as the activity is strenuous and requires concentration. In a first step, conventional, non-learning software corrects the perspective of the photographs, which the SZS receives from measuring campaigns around the world, and then divides each module image into individual images of the single solar cells. Only in the next step does the artificial intelligence take over the work previously done by the photovoltaic experts: it examines the individual cells and, in the event of a defect, classifies them as such.
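The splitting step described above can be sketched in a few lines. This is a minimal illustration, not the SZS software: it assumes the perspective has already been corrected and that the module has a regular grid of cells (here 6 × 10 = 60, a common layout; the function name and parameters are hypothetical).

```python
import numpy as np

def split_module_image(img: np.ndarray, rows: int = 6, cols: int = 10):
    """Split a perspective-corrected EL image of a module into one
    sub-image per solar cell, returned in row-major order."""
    h, w = img.shape[:2]
    ch, cw = h // rows, w // cols  # size of one cell image
    return [img[r * ch:(r + 1) * ch, c * cw:(c + 1) * cw]
            for r in range(rows) for c in range(cols)]

# A synthetic 600x1000 grayscale module image yields 60 cell images of 100x100
module = np.zeros((600, 1000), dtype=np.uint8)
cells = split_module_image(module)
```

Each of the resulting cell images would then be passed to the classifier individually, which is what makes the per-cell verdicts (“intact” / “defective”) possible.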
Accuracy training with 1,800,000 images
For their initial experiments, the team took an existing artificial neural network from the Visual Geometry Group (VGG) at the University of Oxford as the starting architecture, which they adapted and re-trained. The researchers used 98,000 images on which the SZS experts had marked the visible defects. “We know where each solar cell is located in which module and how the company has classified it. This classification task is now performed by the neural network,” explains Bartler. The network learns on its own how to deduce the defects from the cell images. Even with the adapted VGG network, the team achieved an accuracy of 93 percent. “The bigger problem was that the data set was very unevenly distributed: in the data, about 95 percent of the solar cells were intact, but we want to find the other five percent, and we had few examples of those.” The researchers therefore had to adapt the training specifically: they used the images of defective cells several times and slightly modified their appearance each time, a technique known as data augmentation.
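The oversampling-with-augmentation trick for the rare defective cells can be sketched as follows. This is a generic illustration of the technique, not the team’s actual code; the augmentations chosen here (flips and a small brightness jitter) and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(cell: np.ndarray) -> np.ndarray:
    """Return a slightly modified copy of a cell image: random
    horizontal/vertical flips and a small brightness change."""
    out = cell.copy()
    if rng.random() < 0.5:
        out = np.fliplr(out)
    if rng.random() < 0.5:
        out = np.flipud(out)
    factor = rng.uniform(0.9, 1.1)  # +/- 10 percent brightness
    return np.clip(out.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def oversample_defects(defect_cells, target_count):
    """Repeat the minority-class (defective) examples, each time with
    fresh augmentation, until target_count samples are reached."""
    samples = list(defect_cells)
    while len(samples) < target_count:
        samples.append(augment(defect_cells[len(samples) % len(defect_cells)]))
    return samples

# Toy example: balance 5 defective cells against 95 intact ones
defects = [np.full((100, 100), 128, dtype=np.uint8) for _ in range(5)]
balanced = oversample_defects(defects, 95)
```

Without such rebalancing, a classifier can reach 95 percent accuracy on this data simply by labeling everything “intact”, which is exactly the failure mode the researchers describe.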
They are currently testing other network architectures that correspond to the current state of the art. “We now need an even larger training dataset that we can use to create the final version,” says Bartler. This dataset comprises 1,800,000 cell images. Once the network has been trained with it, the intelligent artificial eye should be ready for use in the first half of 2019. At the same time, the scientists are working on refining the system further: if it can detect different types of defects, it could also be used in future to predict exactly how severe the drop in performance of a module will be, Bartler reports. “If it is known that the performance drops by five percent per year with a remaining service life of 15 years, one can calculate exactly whether or not it would be worthwhile to replace a given part.”
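The replacement calculation Bartler alludes to can be sketched with a simple yield model. All numbers here are hypothetical (module power, yield hours, degradation rates); the point is only the shape of the comparison: cumulative energy from a degrading module versus a replacement over the same remaining life.

```python
def remaining_yield(p0_kw: float, degradation: float, years: int,
                    hours_per_year: float = 1000.0) -> float:
    """Total energy (kWh) a module delivers over its remaining life,
    assuming its output falls by `degradation` (fraction) each year."""
    total, p = 0.0, p0_kw
    for _ in range(years):
        total += p * hours_per_year
        p *= (1.0 - degradation)
    return total

# Hypothetical comparison over 15 remaining years:
# keep a module degrading 5 %/year vs. replace it with one degrading 0.5 %/year
keep = remaining_yield(0.3, 0.05, 15)
replace = remaining_yield(0.3, 0.005, 15)
extra_kwh = replace - keep
# Replacement pays off if extra_kwh * electricity_price > replacement_cost
```

A park operator would plug the defect-specific degradation rate predicted by the system into such a model and compare the extra yield against the cost of swapping the module.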