News

Keynote by Dr. Pamela Douglas

Title: On the Stability of Deep Learning Representations for Neuroimaging

The increasing use of deep neural networks (DNNs) has motivated a parallel endeavor: the design of adversaries that profit from successful misclassifications.  However, not all adversarial examples are crafted for malicious purposes.  For example, real-world systems often contain physical, temporal, and sampling variability across instrumentation and across data collection sites. Adversarial examples in the wild may inadvertently prove deleterious for accurate predictive modeling. Conversely, naturally occurring covariance of image features may serve didactic purposes. In this talk, we examine the stability of deep learning representations for neuroimaging classification across didactic and adversarial conditions characteristic of MRI acquisition variability. We show that representational similarity and performance vary according to the frequency of adversarial examples in the input space.

Keynote by Dr. Yong Fan

Title: Deep learning of MRI brain images for early prediction of Alzheimer’s disease dementia and cognitive decline

Early prediction of Alzheimer’s disease (AD) and cognitive decline could promote timely interventions to slow or halt dementia progression. Although clinical criteria for mild cognitive impairment (MCI) and AD have been developed to formalize assessment of the gradual progression of the disease, it remains challenging to predict when and which individuals who meet criteria for MCI will ultimately progress to AD dementia. It is even more difficult to predict when a cognitively normal person will experience cognitive decline. In this talk, we will present our recent neuroimaging studies for predicting individual MCI subjects’ progression to AD dementia and cognitive decline in cognitively normal people based on their structural magnetic resonance imaging (MRI) data. In particular, we have developed deep learning frameworks to extract informative features from MRI data and build prognostic models on the extracted features to predict AD dementia and cognitive decline in a time-to-event analysis setting. We have evaluated the proposed methods using baseline structural MRI data of subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) and the Australian Imaging Biomarkers and Lifestyle Study of Aging (AIBL). We also compared the deep learning-based imaging features with conventional hand-crafted imaging features. Extensive evaluation experiments have demonstrated that the deep learning prediction models achieve promising performance for early prediction of the rapidity of dementia progression and of cognitive decline in individual older adults, based on their baseline MRI scans together with demographic and cognitive measures.
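As a rough illustration of the time-to-event setting mentioned in the talk (not the speaker's implementation), the sketch below fits a Cox proportional hazards model with the lifelines package on synthetic stand-ins for deep-learning-derived MRI features and baseline covariates; every variable name and value is invented.

```python
# Hypothetical sketch: Cox proportional hazards model on DNN-extracted MRI
# features plus demographic/cognitive covariates (synthetic data, illustrative only).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "dl_feat_1": rng.normal(size=n),                # stand-ins for deep imaging features
    "dl_feat_2": rng.normal(size=n),
    "age": rng.normal(73, 6, size=n),
    "mmse": rng.normal(27, 2, size=n),
    "time_to_event": rng.exponential(36, size=n),   # months to progression / censoring
    "progressed": rng.integers(0, 2, size=n),       # 1 = progressed to AD dementia
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event", event_col="progressed")
cph.print_summary()

# Per-subject relative risk of progression (higher = faster expected progression).
risk = cph.predict_partial_hazard(df)
```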

Accepted Papers

Relevance Vector Machines for harmonization of MRI brain volumes using image descriptors

Authors: Maria Ines Meyer (Technical University of Denmark); Ezequiel de la Rosa (Technical University of Munich); Koen Van Leemput (Technical University of Denmark); Diana Sima (icometrix, Leuven, Belgium)

Abstract: With the increased need for multi-center magnetic resonance imaging studies, problems arise related to differences in hardware and software between centers. In particular, current algorithms for brain volume quantification are unreliable for the longitudinal assessment of volume changes in this type of setting. Currently, most methods attempt to mitigate this issue by regressing the scanner and/or center effects out of the original data. In this work, we explore a novel approach to harmonizing brain volume measurements by using only image descriptors. First, we explore the relationships between volumes and image descriptors. Then, we train a Relevance Vector Machine (RVM) model on a large multi-site dataset of healthy subjects to perform volume harmonization. Finally, we validate the method on two different datasets: i) a subset of unseen healthy controls, and ii) a test-retest dataset of multiple sclerosis (MS) patients. The method decreases scanner and center variability while preserving measurements that did not require correction in MS patient data. We show that image descriptors can be used as input to a machine learning algorithm to improve the reliability of longitudinal volumetric studies.
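To make the descriptor-based regression idea concrete, the sketch below fits scikit-learn's ARDRegression, a sparse Bayesian regressor used here only as a stand-in for a Relevance Vector Machine, to map image descriptors to volume measurements and then works with the residuals; the synthetic data, descriptor count, and residual step are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of descriptor-based volume harmonization
# (ARDRegression as a sparse Bayesian stand-in for an RVM; not the authors' code).
import numpy as np
from sklearn.linear_model import ARDRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# X: per-scan image descriptors (e.g., contrast, noise, resolution metrics);
# y: brain volume measurements affected by scanner/center effects.
X = rng.normal(size=(500, 10))
y = 1200 + X @ rng.normal(size=10) + rng.normal(scale=5.0, size=500)

model = make_pipeline(StandardScaler(), ARDRegression())
model.fit(X, y)

# One possible correction: treat the descriptor-predicted component as the
# scanner-driven part and keep the residual as the harmonized measurement.
residuals = y - model.predict(X)
```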


Automated Quantification of Enlarged Perivascular Spaces in Clinical Brain MRI across Sites

Authors: Florian Dubost (Erasmus MC); Max Duennwald (Otto von Guericke University Magdeburg); Denver Huff (MedDigit, OVGU, Magdeburg); Vincent Scheumann (Dept. of Neurology, Otto-v.-Guericke Univ. Magdeburg); Frank Schreiber (Dept. of Neurology, Otto-v.-Guericke Univ. Magdeburg); Wiro Niessen (Erasmus MC); Meike Vernooij (Erasmus MC); Martin Skalej (Dept. of Neurology, Otto-v.-Guericke Univ. Magdeburg)*; Stefanie Schreiber (Dept. of Neurology, Otto-v.-Guericke Univ. Magdeburg); Steffen Oeltze-Jafra (Dept. of Neurology, Otto-v.-Guericke Univ. Magdeburg); Marleen de Bruijne (Erasmus MC Rotterdam / University of Copenhagen)

Abstract: Enlarged perivascular spaces (PVS) are structural brain changes visible on MRI and a marker of cerebral small vessel disease. Most studies use time-consuming and subjective visual scoring to assess these structures. Recently, automated methods to quantify enlarged perivascular spaces have been proposed. Most of these methods have been evaluated only in high-resolution scans acquired in controlled research settings. We evaluate and compare two recently published automated methods for the quantification of enlarged perivascular spaces in 76 clinical scans acquired from 9 different scanners. Both methods are neural networks trained on high-resolution research scans and are applied without fine-tuning the networks’ parameters. By adapting the preprocessing of clinical scans, regions of interest similar to those computed from research scans can be processed. The first method estimates only the number of PVS, while the second also simultaneously estimates a high-resolution attention map that can be used to detect and segment PVS. The Pearson correlations between visual and automated scores of enlarged perivascular spaces were higher with the second method. With this method, the correlation in the centrum semiovale was similar to the inter-rater agreement and to the performance on high-resolution research scans. Results were slightly lower than the inter-rater agreement for the hippocampi and noticeably lower in the basal ganglia. By computing attention maps, we show that the neural networks focus on the enlarged perivascular spaces. Automated scoring of the PVS burden in the centrum semiovale reached a satisfactory performance, could be implemented in the clinic and could, for example, help predict the bleeding risk related to cerebral amyloid angiopathy.
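As a loose illustration of how an attention map can be read out of a counting network (not the published methods themselves), the toy sketch below backpropagates the predicted PVS count of a small 3D regression CNN to the input scan and uses the gradient magnitude as a voxel-wise attention map; the architecture and input sizes are invented.

```python
# Hypothetical sketch: gradient-based attention map from a 3D PVS-counting CNN
# (toy architecture and data, not the networks evaluated in the paper).
import torch
import torch.nn as nn

class TinyPVSCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(16, 1)   # regresses the number of PVS in the ROI

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyPVSCounter().eval()
scan = torch.randn(1, 1, 64, 64, 32, requires_grad=True)  # toy region of interest
count = model(scan)
count.sum().backward()
attention = scan.grad.abs().squeeze()   # voxels that drive the predicted count
```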


Data Pooling and Sampling of Heterogeneous Image Data for White Matter Hyperintensity Segmentation

Authors: Annika Hänsch (Fraunhofer MEVIS)*; Bastian Cheng (Universitätsklinikum Hamburg-Eppendorf); Benedikt Frey (Universitätsklinikum Hamburg-Eppendorf); Carola Mayer (Universitätsklinikum Hamburg-Eppendorf); Marvin Petersen (Universitätsklinikum Hamburg-Eppendorf); Iris Lettow (Universitätsklinikum Hamburg-Eppendorf); Farhad Yazdan Shenas (Universitätsklinikum Hamburg-Eppendorf); Götz Thomalla (Universitätsklinikum Hamburg-Eppendorf); Jan Klein (Fraunhofer MEVIS, Germany); Horst Hahn (Fraunhofer MEVIS, Germany)

Abstract: White Matter Hyperintensities (WMH) are imaging biomarkers that indicate cerebral microangiopathy, a risk factor for stroke and vascular dementia. When training Deep Neural Networks (DNN) to segment WMH, data pooling may be used to increase the training dataset size. However, it is not yet fully understood how the pooling of heterogeneous data influences segmentation performance. In this contribution, we investigate the impact of sampling ratios between different datasets with varying data quality and lesion volumes. We observe systematic changes in DNN performance and segmented lesion volume depending on the sampling ratio. If the ratio is properly chosen, a single DNN can accurately segment and quantify both large and small lesions on test data of different quality without loss of performance compared to a specialized DNN.
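A minimal sketch of how such a sampling ratio between pooled datasets can be enforced during training, here with PyTorch's WeightedRandomSampler; the datasets, tensor shapes, and the 50/50 ratio are placeholders, not the study's configuration.

```python
# Hypothetical sketch: drawing a fixed fraction of each training batch from each
# of two pooled datasets, regardless of their sizes (illustrative values only).
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset, WeightedRandomSampler

dataset_a = TensorDataset(torch.randn(200, 1, 32, 32), torch.randint(0, 2, (200, 1, 32, 32)))
dataset_b = TensorDataset(torch.randn(800, 1, 32, 32), torch.randint(0, 2, (800, 1, 32, 32)))
pooled = ConcatDataset([dataset_a, dataset_b])

ratio_a = 0.5  # expected fraction of samples drawn from dataset A
weights = torch.cat([
    torch.full((len(dataset_a),), ratio_a / len(dataset_a)),
    torch.full((len(dataset_b),), (1 - ratio_a) / len(dataset_b)),
])
sampler = WeightedRandomSampler(weights, num_samples=len(pooled), replacement=True)
loader = DataLoader(pooled, batch_size=8, sampler=sampler)
```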


Deep Transfer Learning For Whole-Brain fMRI Analyses

Authors: Armin W Thomas (Technische Universität Berlin); Klaus-Robert Müller (Technische Universität Berlin); Wojciech Samek (Fraunhofer HHI)*

Abstract: The application of deep learning (DL) models to the decoding of cognitive states from whole-brain functional Magnetic Resonance Imaging (fMRI) data is often hindered by the small sample size and high dimensionality of these datasets, especially in clinical settings, where patient data are scarce. In this work, we demonstrate that transfer learning represents a solution to this problem. In particular, we show that a DL model that has been pre-trained on a large, openly available fMRI dataset of the Human Connectome Project outperforms a model variant with the same architecture trained from scratch, when both are applied to the data of a new, unrelated fMRI task. Moreover, the pre-trained DL model variant correctly decodes 67.51% of the cognitive states from a test dataset of 100 individuals when fine-tuned on a dataset the size of only three subjects.
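A minimal sketch of the fine-tuning step described above, assuming a PyTorch decoding model whose feature extractor is reused and whose classification head is replaced for the new task; the model class, layer sizes, and data are invented stand-ins, not the authors' code.

```python
# Hypothetical sketch: fine-tune a pre-trained cognitive-state decoder on a
# very small target dataset (toy model and synthetic data, illustrative only).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class Decoder(nn.Module):
    """Toy stand-in for the whole-brain decoding network."""
    def __init__(self, n_states):
        super().__init__()
        self.feature_extractor = nn.Sequential(nn.Linear(1000, 128), nn.ReLU())
        self.classifier = nn.Linear(128, n_states)

    def forward(self, x):
        return self.classifier(self.feature_extractor(x))

model = Decoder(n_states=20)
# model.load_state_dict(torch.load("hcp_pretrained_decoder.pt"))  # hypothetical checkpoint
model.classifier = nn.Linear(128, 4)              # head for the new task's cognitive states

for p in model.feature_extractor.parameters():    # keep pre-trained features fixed
    p.requires_grad = False

target_loader = DataLoader(                        # stand-in for a few subjects' data
    TensorDataset(torch.randn(60, 1000), torch.randint(0, 4, (60,))), batch_size=8)
optimizer = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for x, y in target_loader:
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
```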


A Hybrid 3DCNN and 3DC-LSTM based model for 4D Spatio-temporal fMRI data: An ABIDE Autism Classification study

Authors: Ahmed ElGazzar (AMC)*; Rajat Thomas (NiN); Guido Van Wingen (AMC); Leonardo Cerliani (AMC); Mirjam Quaak (AMC)

Abstract: Functional Magnetic Resonance Imaging (fMRI) is a primary modality for studying brain activity. fMRI scans comprise a spatial dimension, represented by the 3D volume of the brain, and a temporal dimension, represented by the number of volumes acquired during the scan. Hence, to extract useful patterns from fMRI scans a model has to be able to capture spatiotemporal dependencies in the signal. However, the high dimensionality of these data and the small sample sizes pose serious modeling challenges and considerable computational constraints. For the sake of feasibility, standard models typically reduce dimensionality by modeling covariance among regions of interest (ROIs) or by ignoring or summarizing the spatial or temporal dimension. This can drastically reduce the ability to detect useful patterns in the scans. To overcome this problem we introduce an end-to-end algorithm capable of extracting spatiotemporal features from the full 4D data using 3D CNNs and 3D Convolutional LSTMs. We evaluate our proposed model on the publicly available ABIDE dataset to demonstrate its capability to diagnose Autism Spectrum Disorder (ASD) from resting-state fMRI data. Our results show that the proposed model achieves state-of-the-art results on single sites, with F1-scores of 0.78 and 0.70 on the NYU and UM sites, respectively.
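As a rough sketch of the hybrid idea (not the authors' published model), the code below applies a small 3D CNN encoder to each fMRI time point and passes the resulting feature volumes through a minimal 3D convolutional LSTM cell before classification; all layer sizes and input shapes are toy values.

```python
# Hypothetical sketch: 3D CNN + 3D convolutional LSTM over 4D fMRI data
# (toy architecture, illustrative only).
import torch
import torch.nn as nn

class ConvLSTM3DCell(nn.Module):
    """Minimal 3D convolutional LSTM cell: all gates computed by one 3D convolution."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv3d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class Hybrid4DNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(              # shared 3D CNN applied per time point
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU())
        self.rnn = ConvLSTM3DCell(16, 16)
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, n_classes))

    def forward(self, x):                          # x: (batch, time, 1, D, H, W)
        h = c = None
        for t in range(x.shape[1]):
            feat = self.encoder(x[:, t])
            if h is None:
                h, c = torch.zeros_like(feat), torch.zeros_like(feat)
            h, c = self.rnn(feat, h, c)
        return self.head(h)                        # classify from the final hidden state

logits = Hybrid4DNet()(torch.randn(2, 5, 1, 32, 32, 32))  # toy 4D fMRI batch
```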


Knowledge distillation for semi-supervised domain adaptation

Authors: Henry Mauricio M Orbes Arteaga (King’s College London)*; Jorge Cardoso (King’s College London); Lauge Sørensen (University of Copenhagen); Christian Igel (University of Copenhagen); Sebastien Ourselin (King’s College London); Marc Modat (King’s College London); Mads Nielsen (University of Copenhagen, Denmark); Akshay Pai (Cerebriu A/S, University of Copenhagen)

Abstract: In the absence of sufficient data variation (e.g., scanner and protocol variability) in annotated data, deep neural networks (DNNs) tend to overfit during training. As a result, their performance is significantly lower on data from unseen sources than on data from the same source as the training data. Semi-supervised domain adaptation methods can alleviate this problem by tuning networks to new target domains without the need for annotated data from these domains. Adversarial domain adaptation (ADA) is a popular choice that aims to train networks in such a way that the generated features are domain agnostic. However, these methods require careful dataset-specific selection of hyperparameters, such as the complexity of the discriminator, to achieve reasonable performance. In this paper, we propose to use knowledge distillation (KD), an efficient way of transferring knowledge between different DNNs, for semi-supervised domain adaptation of DNNs without the need for dataset-specific hyperparameter tuning, making it generally applicable. The proposed KD-based method is compared to ADA for segmentation of white matter hyperintensities (WMH) in magnetic resonance imaging (MRI) scans generated by scanners that are not part of the training set. Compared with both the baseline DNN (trained on the source domain only, without any adaptation to the target domain) and with ADA for semi-supervised domain adaptation, the proposed method achieves significantly higher WMH Dice scores.
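A minimal sketch of the distillation idea under stated assumptions: a student segmenter is trained on labeled source-domain slices while also matching the temperature-softened predictions of a source-trained teacher on unlabeled target-domain slices. The models, data, and temperature value are illustrative placeholders, not the paper's setup.

```python
# Hypothetical sketch: knowledge distillation for semi-supervised domain
# adaptation of a segmentation network (toy models and synthetic data).
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student predictions."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean") * (T * T)

teacher = nn.Conv2d(1, 2, 3, padding=1).eval()   # stand-in for a source-trained WMH segmenter
student = nn.Conv2d(1, 2, 3, padding=1)          # same architecture, adapted to the target domain
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

source_img = torch.randn(4, 1, 64, 64)           # annotated source-domain slices
source_lbl = torch.randint(0, 2, (4, 64, 64))
target_img = torch.randn(4, 1, 64, 64)           # unlabeled target-domain slices

optimizer.zero_grad()
supervised = F.cross_entropy(student(source_img), source_lbl)
with torch.no_grad():
    teacher_logits = teacher(target_img)         # soft labels on the target domain
distill = distillation_loss(student(target_img), teacher_logits)
(supervised + distill).backward()
optimizer.step()
```

Unlike ADA, this setup adds no discriminator network, which is what makes it attractive when dataset-specific hyperparameter tuning is undesirable.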