Recent advances in deep learning have reached the field of medical science. However, privacy concerns and regulatory frameworks have hampered the collection and sharing of medical data. Such legal restrictions limit future advances in deep learning, a particularly data-intensive technique. Creating synthetic data that is medically accurate, however, can alleviate privacy concerns and improve deep learning pipelines. This article introduces generative adversarial networks that create realistic X-ray images of knee joints with varying degrees of osteoarthritis. The researchers provide 5,556 real radiographs along with 320,000 artificial (DeepFake) X-ray images for training.
With the help of 15 medical professionals, the researchers evaluated their models for medical accuracy and examined the effect of augmentation on an osteoarthritis severity classification task. For the medical professionals, they created a survey with 30 real and 30 DeepFake images. DeepFakes were mistaken for real images more often than the reverse, showing that their realism was sufficient to fool medical experts. Using limited real-world data and transfer learning, the DeepFakes also increased classification accuracy on the osteoarthritis severity task. Furthermore, when all real training data was replaced with DeepFakes in the same classification task, the accuracy of categorizing real osteoarthritis radiographs dropped only 3.79% from baseline.
Early detection can slow the clinical course and potentially improve the patient's mobility and quality of life, but it poses significant difficulties for both medical professionals and artificial neural networks. Using two generative adversarial networks, the researchers were able to create a practically unlimited number of knee osteoarthritis radiographs at different Kellgren and Lawrence (KL) stages. They first demonstrated the anonymization and augmentation effects in deep learning, and then validated their system with 15 medical professionals. The generated DeepFake X-ray images can be made freely available to researchers and the public.
Generated images from the KL01 WGAN and KL234 WGAN ranged from early training samples to outputs of the best selected models.
The KL01 WGAN and KL234 WGAN networks were trained on X-ray images of human knee anatomy. As training progressed, large structural changes diminished while textural refinements continued. The generator block was built mainly from upsampling and 2D convolution modules with exponential linear unit (ELU) activations and batch normalization, while the discriminator block used dropout layers to avoid overfitting. In the expert survey, 30 authentic and 30 DeepFake images from classes KL01 and KL234 were analyzed, and the experts graded the OA severity of both real and artificial images. More fake images than real ones were mistaken for the other category. OA severity was then predicted in a binary classification task between KL01 and KL234.
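The generator building blocks described above (upsampling, 2D convolution, batch normalization, ELU) can be sketched in PyTorch. This is a minimal illustrative sketch, not the paper's actual architecture: the layer count, channel widths, latent size, and output resolution are all assumptions.

```python
import torch
import torch.nn as nn

def upsample_conv_block(in_ch, out_ch):
    # Upsampling + 2D convolution + batch normalization + ELU activation,
    # mirroring the generator building blocks named in the article.
    return nn.Sequential(
        nn.Upsample(scale_factor=2, mode="nearest"),
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ELU(),
    )

class Generator(nn.Module):
    # Illustrative generator; widths and depth are assumptions.
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 7 * 7)
        self.blocks = nn.Sequential(
            upsample_conv_block(256, 128),  # 7x7   -> 14x14
            upsample_conv_block(128, 64),   # 14x14 -> 28x28
            upsample_conv_block(64, 32),    # 28x28 -> 56x56
        )
        self.to_image = nn.Conv2d(32, 1, kernel_size=3, padding=1)

    def forward(self, z):
        x = self.fc(z).view(-1, 256, 7, 7)
        return torch.tanh(self.to_image(self.blocks(x)))

z = torch.randn(4, 128)
print(Generator()(z).shape)  # torch.Size([4, 1, 56, 56])
```

Each block doubles the spatial resolution while halving the channel count, the usual pattern for convolutional GAN generators.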
With the DeepFake augmentation set, the researchers found that losses were reduced and validation accuracy increased. The augmentation setting with the highest test result, +200% fakes, was the most effective. Overall, both the augmentation and anonymization effects suggested beneficial downstream consequences for knee osteoarthritis classification. Deep neural networks can thus create medically plausible X-ray images of knee osteoarthritis; the associated augmentation effects and anonymization through replacement were recorded for the first time in this study.
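The "+200% fakes" setting can be read as adding DeepFakes worth twice the size of the real training set. A minimal sketch of such mixing; the function name and random sampling scheme are assumptions, not the paper's code:

```python
import random

def build_augmented_set(real, fakes, pct_fakes):
    # pct_fakes = 200 means: add DeepFakes equal to 200% of the real set's size.
    n_fakes = int(len(real) * pct_fakes / 100)
    return real + random.sample(fakes, n_fakes)

real = [f"real_{i}" for i in range(100)]
fakes = [f"fake_{i}" for i in range(1000)]
augmented = build_augmented_set(real, fakes, 200)
print(len(augmented))  # 300: 100 real + 200 DeepFake images
```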
To increase classification accuracy in transfer learning with limited data, DeepFake images were added to the real training data. Such transfer learning strategies are common in the medical field, where data is often scarce and difficult to collect. To prevent GPU memory overflow, an image size of 210×210 was used. To increase the number of images available per class, the researchers merged the KL grades into two classes (KL01 and KL234) for the binary osteoarthritis severity models. Merging also reduced the label noise associated with early KL grades.
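Merging the five KL grades into two classes can be sketched as follows; the mapping is inferred from the class names KL01 and KL234, and the helper name is illustrative:

```python
def merge_kl(grade):
    # KL grades 0-1 -> "KL01" (no/doubtful OA), grades 2-4 -> "KL234" (definite OA).
    if grade not in (0, 1, 2, 3, 4):
        raise ValueError(f"invalid KL grade: {grade}")
    return "KL01" if grade in (0, 1) else "KL234"

print([merge_kl(g) for g in range(5)])
# ['KL01', 'KL01', 'KL234', 'KL234', 'KL234']
```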
Focus filtering was used to prevent focused and unfocused textures from being combined in one image, as large gaps in X-ray focus and texture clarity would confuse the generator. Experts struggled to distinguish DeepFake images from real ones; the large standard deviations observed in the KL grading agreement task also reflect this effect. The experts' assessments were also skewed, as some images showed clearer clinical characteristics than others. Landmark generation and recognition may benefit from further integration of landmark tags.
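A common focus measure is the variance of the image Laplacian: it is near zero for blurry or flat images and grows with sharp, high-frequency texture. The sketch below shows focus filtering on that basis; the specific measure and threshold are assumptions, as the paper's exact filter is not described here.

```python
import numpy as np

def sharpness(img):
    # Variance of a discrete 5-point Laplacian over the image interior:
    # low for blurry/flat images, higher for sharply focused texture.
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def focus_filter(images, threshold):
    # Keep only images whose focus measure clears the threshold.
    return [im for im in images if sharpness(im) >= threshold]

flat = np.full((32, 32), 0.5)                       # uniformly flat image
noisy = np.random.default_rng(0).random((32, 32))   # high-frequency texture
print(sharpness(flat), sharpness(noisy) > 0.0)      # 0.0 True
```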
The 4,130 radiographs, each covering both knee joints, were used to create the knee images, which were graded with the Kellgren and Lawrence system: 3,253 images of grade 0, 1,495 of grade 1, 2,175 of grade 2, 1,086 of grade 3, and 251 of grade 4. To examine how realistic the DeepFake images are, the researchers randomly generated 15 KL01 and 15 KL234 images and asked the medical professionals to grade their KL scores.
The images were resized to 315×315 pixels and included in the survey in random order. The balanced accuracy metric was used to deal with unbalanced responses. The study team used a straightforward variant of the ImageNet-pretrained VGG16 architecture, trained for 22 epochs with only the last three blocks trainable and the rest frozen. To generate each dataset, they started with real data and gradually added more DeepFake data, selecting real images at random with Python's `random` module.
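Balanced accuracy averages per-class recall, so an unbalanced class distribution cannot inflate the score. A minimal sketch of the metric (the implementation is illustrative, not the paper's code):

```python
def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recalls: each class counts equally,
    # regardless of how many samples it has.
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(recalls)

# 3 samples of "KL01", 1 of "KL234": plain accuracy would be 0.75,
# but balanced accuracy weighs the minority class equally.
y_true = ["KL01", "KL01", "KL01", "KL234"]
y_pred = ["KL01", "KL01", "KL234", "KL234"]
print(round(balanced_accuracy(y_true, y_pred), 4))  # 0.8333
```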
Prezja, F., Paloneva, J., Pölönen, I. et al. DeepFake knee osteoarthritis X-ray images from generative adversarial neural networks deceive medical experts and offer potential for automatic classification. Sci Rep 12, 18573 (2022). https://doi.org/10.1038/s41598-022-23081-4
Ashish Kumar is a consulting intern at MarktechPost. He is currently pursuing his B.Tech at the Indian Institute of Technology (IIT) Kanpur. He is passionate about exploring new technological advances and their applications in real life.