Imaging plays a significant role in nuclear medicine, both in diagnosis and treatment. However, every examination method has its weaknesses, some of which can only be compensated for by combining two procedures. Scientists at the Universities of Stuttgart and Tübingen want to eliminate this problem with new machine learning methods. The initial results are encouraging.
It is often not good news if a patient needs to be examined using a positron emission tomography (PET) scan. According to the German Society for Nuclear Medicine (DGN), a tumor, suspected dementia and epilepsy are typical indications for which this expensive examination method is considered appropriate. To the layperson, the device does not look much different from the more familiar computed tomography (CT) or magnetic resonance imaging (MRI) scanners. The results of a PET scan also seem the same: two-dimensional sectional images of a particular region of the body. But experts, such as radiographers, are well aware of the differences between these procedures. CT and MRI produce images in which anatomic structures, such as bones, tissues and organs, can be identified. A PET scan, on the other hand, shows metabolic processes, i.e., flows at the molecular level. With specific reference to a tumor, one could say that CT and MRI images show where the tumor is and how big it is. The PET images, on the other hand, primarily provide insights into its activity, revealing how aggressive it is.
“However, it isn't the case that PET images contain no information at all about the anatomic structure”, says Karim Armanious, a doctoral student at the University of Stuttgart's Institute of Signal Processing and System Theory (ISS), which is headed by Professor Bin Yang. “But this information is so sparse that a PET scanner often has to be operated in combination with a CT or MRI scanner” – so that each of the imaging processes can contribute its respective strengths. This is currently everyday medical practice, and it means that examining a patient in such a tandem device takes a very long time, because the radiologist has to take twice as many images. The consequences are additional stress for the patient, but also fewer examinations per day for the operator and therefore a reduction in the device's economic efficiency. Moreover, the CT scans expose the patient to an additional radiation dose. “That’s why radiologists want PET images that don’t require any additional scans and still provide sufficient anatomical information”, Armanious explains. He and his ISS colleagues are working towards this goal in collaboration with radiologists from the University Hospital of Tübingen (UKT).
New stars of machine learning
To achieve this, they are using a machine learning method known as Generative Adversarial Networks (GANs). “This is a new approach, first introduced just four years ago, which is currently very popular among researchers”, says Armanious, who earned his master's degree in Information Technology with a special focus on Communication Engineering and Media Technology. The GAN principle can be explained by way of an analogy: an art forger wants to paint the Mona Lisa so well that it is impossible to tell the difference between his painting and the original. An art expert compares the forged picture with the original, not knowing which is which. The forger is told whether or not the expert was able to identify the original, but not how he spotted the fake. So, with each new attempt, he changes the style, colors, perspective and appearance of the sitter and again presents the result to the expert, who gives every picture the thumbs down as long as he can tell it apart from the original.
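The feedback loop in this analogy can be caricatured in a few lines of Python. To be clear, this is not a real GAN – there are no neural networks and no gradients, and all names and numbers here are invented for illustration. The sketch only captures the one-bit nature of the feedback: the forger learns *whether* it fooled the expert, never *why* it failed.

```python
import random

def expert_spots_fake(forgery, original, tolerance):
    """The 'expert' (discriminator): returns only a yes/no verdict,
    never an explanation of how the fake was identified."""
    return abs(forgery - original) > tolerance

def train_forger(original, tolerance=0.01, max_attempts=100_000, seed=1):
    """The 'forger' (generator): keeps producing new attempts, taking
    nothing away from each round except the binary verdict."""
    rng = random.Random(seed)
    for attempt in range(1, max_attempts + 1):
        forgery = rng.uniform(0.0, 1.0)        # a new 'painting'
        if not expert_spots_fake(forgery, original, tolerance):
            return forgery, attempt            # expert fooled: done
    raise RuntimeError("the expert was never fooled")

forgery, attempts = train_forger(original=0.5)
print(f"accepted forgery {forgery:.4f} after {attempts} attempts")
```

In an actual GAN, the forger does not guess blindly: both forger and expert are neural networks, and the expert's verdicts are converted into gradient signals that steer the forger's parameters – which is why the two can improve each other round after round.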
A human art forger would probably lose heart at some point and try to come up with some other scam. In Armanious' experiments, however, forger and expert are both GANs – algorithms running on a computer; they have no concept of disappointment and never get tired. Nor is it about the Mona Lisa, but about CT images. “After 36 hours of computing time on a high-end graphics card, the training of our two GANs had advanced to the point that the synthetic CT images were barely discernible from the real ones”, he says. “Our quantitative tests on the computer then produced a concordance score of over 90 per cent”. But that's not all: Armanious and his team showed the synthetic and real CT images to six doctors who assess CT data routinely as part of their daily practice. They were asked to rate the quality of the images on a scale of one (low quality) to four (high quality), without knowing which of the images were based on real data and which had been generated synthetically. “The doctors gave the real images an average score of 3.3 and the synthetic ones a 3.0”, says the scientist: a pretty convincing result!
Artificial intelligence is on the increase
“Until now”, Armanious adds, “the research community has only used GANs that were originally designed for other applications to tackle medical issues. We’re the first to have developed a GAN from scratch specifically for medical purposes”. One result is significantly shorter processing times. Creating synthetic CT images is just the first step for the researchers. Work is now starting on enhancing PET images with anatomical information, so as to render the CT images from tandem devices superfluous. “To do this”, Armanious continues, “we compare the traditional imaging method with the new approach using PET imaging data from Tübingen”. One GAN attempts to reconstruct anatomical information from the pure PET data, making the CT data superfluous, whilst the other GAN compares the images generated in this way with images based – as they have been to date – on combined PET and CT data. Then the competition between the original and the “forgery” enters the next round.