Automated Detection of Alzheimer’s Disease: A Multi-Modal Approach with 3D MRI and Amyloid PET

DOI 10.1038/s41598-024-56001-9
Abstract

Recent advances in deep learning and imaging technologies have revolutionized automated medical image analysis, especially the diagnosis of Alzheimer’s disease (AD) through neuroimaging. Despite the availability of multiple imaging modalities for the same patient, the development of multi-modal models that leverage these modalities remains underexplored. This paper addresses this gap by proposing and evaluating classification models that use 2D and 3D MRI images and amyloid PET scans in both uni-modal and multi-modal frameworks. Our findings demonstrate that models trained on volumetric data learn more effective representations than those trained only on 2D images. Furthermore, integrating multiple modalities significantly enhances model performance over single-modality approaches. We achieved state-of-the-art performance on the OASIS-3 cohort. Additionally, explainability analyses with Grad-CAM indicate that our model focuses on crucial AD-related regions for its predictions, underscoring its potential to aid in understanding the disease’s causes.
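As a rough illustration of the multi-modal setup described above, the following is a minimal sketch (assuming PyTorch; it is not the authors’ released code or architecture) of a dual-branch 3D CNN that encodes an MRI volume and an amyloid PET volume separately and fuses the two embeddings by concatenation before a shared classification head. All class names, layer sizes, and the toy input shapes are illustrative assumptions.

```python
# Hypothetical sketch of multi-modal (MRI + amyloid PET) 3D classification.
# Assumes PyTorch; layer sizes and shapes are placeholders, not the paper's model.
import torch
import torch.nn as nn


class Conv3dBranch(nn.Module):
    """Small 3D convolutional encoder for one imaging modality."""

    def __init__(self, in_channels: int = 1, feat_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> (B, 32, 1, 1, 1)
        )
        self.proj = nn.Linear(32, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x).flatten(1)  # (B, 32)
        return self.proj(z)             # (B, feat_dim)


class MultiModalClassifier(nn.Module):
    """Concatenates per-modality embeddings and classifies (e.g., CN vs. AD)."""

    def __init__(self, num_classes: int = 2, feat_dim: int = 64):
        super().__init__()
        self.mri_branch = Conv3dBranch(feat_dim=feat_dim)
        self.pet_branch = Conv3dBranch(feat_dim=feat_dim)
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, mri: torch.Tensor, pet: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.mri_branch(mri), self.pet_branch(pet)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    model = MultiModalClassifier()
    mri = torch.randn(2, 1, 64, 64, 64)  # toy MRI volumes (B, C, D, H, W)
    pet = torch.randn(2, 1, 64, 64, 64)  # toy amyloid PET volumes
    logits = model(mri, pet)
    print(logits.shape)  # torch.Size([2, 2])
```

Feature-level concatenation is only one possible fusion strategy; a uni-modal baseline in this sketch would simply drop one branch and feed a single embedding to the classification head.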


Related Blog Posts

The Future of Alzheimer’s Diagnosis: Unlocking Insights with Multi-modal Imaging Models

March 3, 2024

Dementia affects ~55 million individuals worldwide, with Alzheimer’s disease (AD) being the predominant type. Artificial Intelligence (AI) may aid in its diagnosis. We propose and evaluate multi-modal models for this task, incorporating eXplainable Artificial Intelligence (XAI) for diagnostic transparency.