
Artificial intelligence (AI) holds enormous potential to improve medical imaging, with AI approaches often matching or exceeding the performance of expert readers. However, a presentation at the Society of Nuclear Medicine and Molecular Imaging (SNMMI) Annual Meeting outlined important vulnerabilities in AI models that developers and users must take into account. In particular, the presenters highlighted the threats of data attacks and data manipulation.
The authors reviewed threats and mitigation strategies to underscore the importance of data security in medical imaging AI. Among their concerns are generative adversarial networks (GANs), in which a generator network and a discriminator network are trained in competition; the authors describe them as “unsupervised neural networks, which compete to generate new examples from a given training sample.”
“In the case of imaging, the generated images are indistinguishable from the initial example images visually. GANs have been used to create deep-fake photos and videos. Whether inadvertently or maliciously, processed images could result in data manipulation,” wrote the authors, led by Sriram S. Paravastu of the National Institutes of Health.
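The competition the authors refer to pits a generator, which fabricates samples, against a discriminator, which tries to tell fabricated samples from real ones; each improves by exploiting the other's mistakes. As an illustrative sketch only (not the authors' method, and far simpler than an image-scale GAN), the toy example below trains a two-parameter generator against a logistic discriminator on one-dimensional data standing in for image intensities. All parameter values and the target distribution are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 0.5), a stand-in for genuine images.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: x = a*z + b, mapping noise z to a sample (two parameters).
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), scoring "realness" in (0, 1).
w, c = 0.1, 0.0

lr, steps, n = 0.05, 2000, 64
init_gap = abs((a * rng.normal(size=n) + b).mean() - 4.0)

for _ in range(steps):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    xr = real_batch(n)
    z = rng.normal(size=n)
    xf = a * z + b
    sr, sf = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * (-(1 - sr) * xr + sf * xf).mean()
    c -= lr * (-(1 - sr) + sf).mean()
    # Generator step: push D(fake) toward 1 (non-saturating loss).
    z = rng.normal(size=n)
    xf = a * z + b
    sf = sigmoid(w * xf + c)
    dx = -(1 - sf) * w          # dL/dx_fake for L = -log D(x_fake)
    a -= lr * (dx * z).mean()
    b -= lr * dx.mean()

fake = a * rng.normal(size=1000) + b
final_gap = abs(fake.mean() - 4.0)
print(f"fake-sample mean after training: {fake.mean():.2f}")
```

After training, the generator's samples cluster near the real-data mean even though it never sees the real data directly, only the discriminator's feedback. Scaled up to convolutional networks and image data, this same dynamic is what produces fakes that are visually indistinguishable from genuine scans.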