News

MLCN 2022 Accepted Papers

Joint Reconstruction and Parcellation of Cortical Surfaces

Anne-Marie Rickmann (Ludwig Maximilians University Munich)*; Fabian Bongratz (Technical University of Munich); Sebastian Poelsterl (Ludwig Maximilian University); Ignacio Sarasua (AI Med); Christian Wachinger (Technical University of Munich)

Abstract: The reconstruction of cerebral cortex surfaces from brain MRI scans is instrumental for the analysis of brain morphology and the detection of cortical thinning in neurodegenerative diseases like Alzheimer’s disease (AD). Moreover, for a fine-grained analysis of atrophy patterns, the parcellation of the cortical surfaces into individual brain regions is required. For the former task, powerful deep learning approaches, which provide highly accurate brain surfaces of tissue boundaries from input MRI scans in seconds, have recently been proposed. However, these methods do not come with the ability to provide a parcellation of the reconstructed surfaces. Instead, separate brain-parcellation methods have been developed, which typically consider the cortical surfaces as given, often computed beforehand with FreeSurfer.
In this work, we propose two options, one based on a graph classification branch and another based on a novel generic 3D reconstruction loss, to augment template-deformation algorithms such that the surface meshes directly come with an atlas-based brain parcellation. By combining both options with two of the latest cortical surface reconstruction algorithms, we attain highly accurate parcellations with a Dice score of 90.2 (graph classification branch) and 90.4 (novel reconstruction loss) together with state-of-the-art surfaces.
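
To illustrate the classification-branch idea in generic form, here is a minimal sketch of a per-vertex parcellation head on top of template-deformation vertex features; the plain MLP stands in for the paper's graph classification branch, and all names and dimensions are illustrative assumptions, not the authors' code:

```python
# Hypothetical sketch: a per-vertex classification head that could sit on top
# of a template-deformation surface network, assigning each mesh vertex one of
# C parcellation classes. Names and dimensions are illustrative.
import torch
import torch.nn as nn

class VertexParcellationHead(nn.Module):
    def __init__(self, feat_dim: int = 64, n_classes: int = 36):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, vertex_feats: torch.Tensor) -> torch.Tensor:
        # vertex_feats: (V, feat_dim) per-vertex features from the deformation network
        return self.mlp(vertex_feats)  # (V, n_classes) per-vertex class logits

head = VertexParcellationHead()
logits = head(torch.randn(1000, 64))      # 1000 template vertices
labels = torch.randint(0, 36, (1000,))    # atlas labels carried by the template
loss = nn.functional.cross_entropy(logits, labels)
```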


Neuroimaging harmonization using cGANs: image similarity metrics poorly predict cross-protocol volumetric consistency

Veronica Ravano (Siemens Healthcare AG)*; Jean-François Démonet (Lausanne University Hospital and University of Lausanne); Daniel Damian (Lausanne University Hospital and University of Lausanne); Reto Meuli (Lausanne University Hospital (CHUV)); Gian Franco Piredda (Siemens); Till Huelnhagen (Siemens Healthcare AG); Benedicte Marechal (Siemens Healthineers); Jean-Philippe Thiran (École Polytechnique Fédérale de Lausanne); Tobias Kober (Siemens Healthcare AG); Jonas Richiardi (Lausanne University Hospital (CHUV))

Abstract: Computer-aided clinical decision support tools for radiology often suffer from poor generalizability in multi-centric frameworks due to data heterogeneity. In particular, magnetic resonance images depend on a large number of acquisition protocol parameters, as well as hardware and software characteristics that may differ between or even within institutions. In this work, we use a supervised image-to-image harmonization framework based on a conditional generative adversarial network to reduce inter-site differences in T1-weighted images acquired with different dementia protocols. We investigate the use of different hybrid losses, including standard voxel-wise distances and a more recent perceptual similarity metric, and how they relate to image similarity metrics and volumetric consistency in brain segmentation. In a test cohort of 30 dementia patients scanned with multiple protocols, we show that, despite improvements in terms of image similarity, the generated synthetic images do not necessarily result in reduced inter-site volumetric differences, highlighting the mismatch between harmonization performance and its impact on the robustness of post-processing applications. Hence, our results suggest that traditional image similarity metrics such as PSNR or SSIM may poorly reflect the performance of different harmonization techniques in terms of improving cross-domain consistency.
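
For a concrete picture of a hybrid harmonization loss, here is a minimal sketch combining a voxel-wise L1 term with a feature-space perceptual term; `feat_net` and the weights are illustrative assumptions, not the paper's configuration:

```python
# Minimal sketch of a hybrid generator loss mixing a voxel-wise L1 term with a
# perceptual term computed in the feature space of a frozen network. `feat_net`
# is a placeholder for whatever pretrained feature extractor is used; the
# lambda_* weights are illustrative, not the paper's values.
import torch
import torch.nn.functional as F

def hybrid_loss(fake, real, feat_net, lambda_l1=1.0, lambda_perc=0.1):
    l1 = F.l1_loss(fake, real)                    # voxel-wise distance
    with torch.no_grad():
        real_feats = feat_net(real)
    perc = F.l1_loss(feat_net(fake), real_feats)  # perceptual (feature) distance
    return lambda_l1 * l1 + lambda_perc * perc
```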


Volume is All You Need: Improving Multi-task Multiple Instance Learning for WMH Segmentation and Severity Estimation

Wooseok Jung (VUNO Inc.); Chong Hyun Suh (Asan Medical Center)*; Woo Hyun Shim (Asan Medical Center); Jinyoung Kim (VUNO Inc.); Dongsoo Lee (VUNO Inc.); Changhyun Park (VUNO Inc.); Seo Taek Kong (VUNO Inc.); Kyu-Hwan Jung (VUNO Inc.); Hwon Heo (Asan Medical Center); Sang Joon Kim (Asan Medical Center)

Abstract: White matter hyperintensities (WMHs) are lesions with unusually high intensity detected in T2 fluid-attenuated inversion recovery (T2-FLAIR) MRI images, commonly attributed to vascular dementia (VaD) and chronic small vessel ischaemia. The Fazekas scale is a measure of WMH severity widely used in radiology research. Although stand-alone WMH segmentation methods have been extensively investigated, a model encapsulating both WMH segmentation and Fazekas scale prediction has not been explored. We propose a novel multi-task multiple instance learning (MTMIL) model for simultaneous WMH lesion segmentation and Fazekas scale estimation. The model is initially trained only on the segmentation task to overcome the difficulty of the manual annotation process. Afterward, volume-guided attention (VGA), obtained directly from instance-level segmentation results, identifies key instances for the classification task. We trained the model on 558 in-house brain MRI scans, of which only 58 have WMH annotations. Our MTMIL method, reinforced by segmentation results, outperforms other multiple instance learning methods.
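
A minimal sketch of what a volume-guided attention pooling step could look like (illustrative names, not the authors' code): per-slice lesion volumes from the soft segmentation are converted into attention weights over instances:

```python
# Illustrative sketch of "volume-guided attention": per-slice lesion volumes,
# read off the instance-level segmentation, are turned into attention weights
# that pool instance features for the bag-level Fazekas prediction.
import torch

def volume_guided_attention(inst_feats, seg_probs):
    # inst_feats: (n_slices, feat_dim) features per slice (instance)
    # seg_probs:  (n_slices, H, W) soft WMH segmentation per slice
    volumes = seg_probs.sum(dim=(1, 2))   # soft lesion volume per slice
    attn = torch.softmax(volumes, dim=0)  # slices with more lesion get more weight
    bag_feat = (attn.unsqueeze(1) * inst_feats).sum(dim=0)
    return bag_feat, attn
```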


Augmenting Magnetic Resonance Imaging with Tabular Features for Enhanced and Interpretable Medial Temporal Lobe Atrophy Prediction

Dongsoo Lee (VUNO Inc.); Chong Hyun Suh (Asan Medical Center)*; Jinyoung Kim (VUNO Inc.); Wooseok Jung (VUNO Inc.); Changhyun Park (VUNO Inc.); Kyu-Hwan Jung (VUNO Inc.); Seo Taek Kong (VUNO Inc.); Woo Hyun Shim (Asan Medical Center); Hwon Heo (Asan Medical Center); Sang Joon Kim (Asan Medical Center)

Abstract: The medial temporal lobe atrophy (MTA) score is a key feature for Alzheimer’s disease (AD) diagnosis. Diagnosis of MTA from images acquired using magnetic resonance imaging (MRI) suffers from high inter- and intra-observer discrepancies. The recently developed Vision Transformer (ViT) can be trained on MRI images to classify MTA scores, but it is a “black-box” model whose internal workings are unknown. Further, a fully-trained classifier is susceptible to inconsistent predictions by the nature of the labels used for training. Augmenting imaging data with tabular features could potentially rectify this issue, but ViTs are designed to process imaging data, as their name suggests. This work aims to develop an accurate and explainable MTA classifier. We introduce a multi-modality training scheme that simultaneously handles tabular and image data. Our proposed method processes multi-modality data consisting of T1-weighted brain MRI and tabular data encompassing brain region volumes, cortical thickness, and radiomics features. Our method outperforms the various baselines considered, and its attention maps on input images and feature importance scores on tabular data explain its reasoning.
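
One common way to fuse tabular features with a ViT is to embed the tabular vector as an extra token; the sketch below illustrates that general scheme under assumed names and shapes, not the paper's exact architecture:

```python
# Hedged sketch of image-tabular fusion for a ViT: project the tabular vector
# into a token and prepend it to the image patch tokens before the encoder.
import torch
import torch.nn as nn

class TabularToken(nn.Module):
    def __init__(self, n_tab: int, dim: int):
        super().__init__()
        self.proj = nn.Linear(n_tab, dim)

    def forward(self, patch_tokens: torch.Tensor, tab: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, dim) image tokens; tab: (B, n_tab) tabular features
        tab_token = self.proj(tab).unsqueeze(1)              # (B, 1, dim)
        return torch.cat([tab_token, patch_tokens], dim=1)   # (B, N+1, dim)
```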


Automatic lesion analysis for increased efficiency in outcome prediction of traumatic brain injury

Margherita Rosnati (Imperial College London)*; Eyal Soreq (Imperial College London); Miguel Monteiro (Imperial College London); Lucia Li (Imperial College London); Neil S.N. Graham (Imperial College London); Karl Zimmerman (Imperial College London); Carlotta Rossi (Mario Negri Institute for Pharmacological Research IRCCS); Greta Carrara (Mario Negri Institute for Pharmacological Research IRCCS); Guido Bertolini (Mario Negri Institute for Pharmacological Research IRCCS); David Sharp (Imperial College London); Ben Glocker (Imperial College London)

Abstract: Accurate prognosis for traumatic brain injury (TBI) patients is difficult yet essential to inform therapy, patient management, and long-term after-care. Patient characteristics such as age, motor and pupil responsiveness, hypoxia and hypotension, and radiological findings on computed tomography (CT) have been identified as important variables for TBI outcome prediction. CT is the acute imaging modality of choice in clinical practice because of its acquisition speed and widespread availability. However, this modality is mainly used for qualitative and semi-quantitative assessment, such as the Marshall scoring system, which is prone to subjectivity and human error. This work explores the predictive power of imaging biomarkers extracted from routinely acquired hospital admission CT scans using a state-of-the-art, deep learning TBI lesion segmentation method. We use lesion volumes and corresponding lesion statistics as inputs for an extended TBI outcome prediction model. We compare the predictive power of our proposed features to the Marshall score, independently and when paired with classic TBI biomarkers. We find that automatically extracted quantitative CT features perform similarly or better than the Marshall score in predicting unfavorable TBI outcomes. Leveraging automatic atlas alignment, we also identify frontal extra-axial lesions as important indicators of poor outcomes. Our work may contribute to a better understanding of TBI and provide new insights into how automated neuroimaging analysis can be used to improve prognostication after TBI.
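
As a rough illustration of this pipeline, per-class lesion volumes can be read off a segmentation and combined with clinical variables in a standard classifier; the names and stand-in data below are assumptions, not the study's code:

```python
# Conceptual sketch: turn a multi-class lesion segmentation into simple
# quantitative features (per-class volumes) and feed them to a standard
# outcome classifier alongside clinical variables.
import numpy as np
from sklearn.linear_model import LogisticRegression

def lesion_volume_features(seg: np.ndarray, n_classes: int, voxel_ml: float) -> np.ndarray:
    # seg: integer label map from the lesion segmentation model
    return np.array([(seg == c).sum() * voxel_ml for c in range(1, n_classes)])

# X: rows of [age, motor score, pupil response, ..., lesion volumes]; y: outcome
X, y = np.random.rand(100, 8), np.random.randint(0, 2, 100)  # stand-in data
model = LogisticRegression(max_iter=1000).fit(X, y)
```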


Accurate Hippocampus Segmentation Based on Self-Supervised Learning with Fewer Labeled Data

Kassymzhomart Kunanbayev (KAIST)*; Donggon Jang (KAIST); Woojin Jeong (KAIST); Nahyun Kim (KAIST); Daeshik Kim (KAIST)

Abstract: Brain MRI-based hippocampus segmentation is considered an important biomedical method for the prevention, early detection, and accurate diagnosis of neurodegenerative disorders like Alzheimer’s disease. The recent need for accurate as well as robust systems has led to breakthroughs that take advantage of deep learning but require significant amounts of labeled data, which are costly and hard to obtain. In this work, we address this issue by introducing self-supervised learning for hippocampus segmentation. We devise a new framework, based on the widely known method of Jigsaw puzzle reassembly, in which we first pre-train on an unlabeled MRI dataset and then perform downstream segmentation training on other, labeled datasets. We find that our method captures local-level features, enabling better learning of the anatomical information in brain MRI images. Experiments with downstream segmentation training show considerable performance gains with self-supervised pre-training over supervised training across multiple label fractions.
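
A minimal sketch of the Jigsaw pretext task on a 2D slice (the paper works with MRI volumes; the grid size and number of permutations here are illustrative assumptions):

```python
# Jigsaw pretext task sketch: split an image into a grid of tiles, shuffle them
# with one of K fixed permutations, and train a network to predict which
# permutation was applied. Assumes even spatial dimensions.
import itertools
import random
import numpy as np

PERMS = list(itertools.permutations(range(4)))  # all 24 permutations of a 2x2 grid

def jigsaw_sample(vol: np.ndarray):
    # vol: (H, W) slice for simplicity; split into a 2x2 grid of tiles
    h, w = vol.shape[0] // 2, vol.shape[1] // 2
    tiles = [vol[i*h:(i+1)*h, j*w:(j+1)*w] for i in range(2) for j in range(2)]
    label = random.randrange(len(PERMS))
    shuffled = [tiles[k] for k in PERMS[label]]
    return np.stack(shuffled), label  # network input and permutation class
```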


Concurrent ischemic lesion age estimation and segmentation of CT brain using a Transformer-based network

Adam Marcus (Imperial College London)*; Paul Bentley (Imperial College London); Daniel Rueckert (Imperial College London)

Abstract: The cornerstone of stroke care is expedient management that varies depending on the time since stroke onset. Consequently, clinical decision-making is centered on accurate knowledge of timing and often requires a radiologist to interpret computed tomography (CT) of the brain to confirm the occurrence and age of an event. These tasks are particularly challenging due to the subtle expression of acute ischemic lesions and their dynamic nature. Automation efforts have not yet applied deep learning to estimate lesion age and have treated these two tasks independently, overlooking their inherent complementary relationship. To leverage this, we propose a novel end-to-end multi-task transformer-based network optimized for concurrent segmentation and age estimation of cerebral ischemic lesions. By utilizing gated positional self-attention and modality-specific data augmentation, our method can capture long-range spatial dependencies while maintaining its ability to be trained from scratch under the low-data regimes commonly found in medical imaging. Further, to better combine multiple predictions, we incorporate uncertainty by utilizing a quantile loss to facilitate estimating a probability density function of lesion age. The effectiveness of our model is extensively evaluated on a clinical dataset consisting of 776 CT images collected from two medical centers. Experimental results demonstrate that our method obtains promising performance, with an area under the curve (AUC) of 0.933 for classifying lesion ages ≤4.5 hours, compared to 0.858 using a conventional approach, and outperforms task-specific state-of-the-art algorithms.
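
The quantile (pinball) loss mentioned above has a standard form; a minimal sketch follows, with an illustrative quantile grid (not necessarily the paper's):

```python
# Quantile (pinball) loss: one network output per quantile q, penalized
# asymmetrically, so the outputs jointly trace out a distribution over lesion age.
import torch

QUANTILES = torch.tensor([0.1, 0.25, 0.5, 0.75, 0.9])

def quantile_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # pred: (B, n_quantiles) predicted lesion-age quantiles; target: (B,) true age
    err = target.unsqueeze(1) - pred  # positive where we under-predict
    return torch.mean(torch.maximum(QUANTILES * err, (QUANTILES - 1) * err))
```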


Weakly Supervised Intracranial Hemorrhage Segmentation using Hierarchical Combination of Attention Maps from a Swin Transformer

Amirhossein Rasoulian (Concordia University)*; Soorena Salari (Concordia University); Yiming Xiao (Concordia University)

Abstract: Intracranial hemorrhage (ICH) is a potentially life-threatening emergency with various causes. Rapid and accurate diagnosis of ICH is critical to delivering timely treatment and improving patients’ survival rates. Although deep learning techniques have become the state of the art in medical image processing and analysis, supervised learning often requires large training datasets with high-quality annotations, which are expensive to acquire. This is especially true for image segmentation tasks. To facilitate ICH treatment decisions and tackle this issue, we propose a novel weakly supervised ICH segmentation method that utilizes a hierarchical combination of self-attention maps obtained from a Swin transformer trained on an ICH classification task with categorical labels. We developed and validated the proposed technique using two public clinical CT datasets (RSNA 2019 Brain CT Hemorrhage and PhysioNet). As an exploratory study, we compared two different learning strategies (binary classification vs. full ICH subtyping) to investigate their impact on self-attention and on our weakly supervised ICH segmentation method. Our algorithm, the first to perform ICH detection and weakly supervised segmentation with a Swin transformer, achieved a Dice score of 0.407±0.225 for ICH segmentation while delivering high accuracy in ICH detection (AUC = 0.974).
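
A hedged sketch of hierarchically combining attention maps into a segmentation (the fusion rule, resolution, and threshold below are assumptions, not the paper's exact procedure):

```python
# Combine attention maps from successive transformer stages into a coarse
# segmentation: upsample each stage's map to a common resolution, fuse them
# multiplicatively, normalize, and threshold.
import torch
import torch.nn.functional as F

def fuse_attention_maps(maps, out_hw=(512, 512), thresh=0.5):
    # maps: list of (1, 1, h_i, w_i) attention maps, one per stage
    up = [F.interpolate(m, size=out_hw, mode="bilinear", align_corners=False)
          for m in maps]
    fused = torch.stack(up).prod(dim=0)  # hierarchical combination
    fused = (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)
    return (fused > thresh).float()      # binary ICH mask
```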


A Study of Demographic Bias in CNN-based Brain MR Segmentation

Stefanos Ioannou (King’s College London)*; Hana Chockler (King’s College London); Alexander Hammers (King’s College London); Andrew King (King’s College London)

Abstract: Convolutional neural networks (CNNs) are increasingly being used to automate the segmentation of brain structures in magnetic resonance (MR) images for research studies. In other applications, CNN models have been shown to exhibit bias against certain demographic groups when these are under-represented in the training sets. In this work, we investigate whether CNN models for brain MR segmentation have the potential to contain sex or race bias when trained with imbalanced training sets. We train multiple instances of the FastSurferCNN model using different levels of sex imbalance in white subjects. We evaluate the performance of these models separately on white male and white female test sets to assess sex bias, and furthermore evaluate them on black male and black female test sets to assess potential racial bias. We find significant sex and race bias effects in segmentation model performance. The biases have a strong spatial component, with some brain regions exhibiting much stronger bias than others. Overall, our results suggest that race bias is more significant than sex bias. Our study demonstrates the importance of considering race and sex balance when forming training sets for CNN-based brain MR segmentation, to avoid maintaining or even exacerbating existing health inequalities through biased research study findings.
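
The subgroup-stratified evaluation described above can be sketched as follows (illustrative, not the study's code):

```python
# Group-stratified Dice: compute segmentation overlap separately for each
# demographic subgroup and compare the means across groups.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum() + 1e-8)

def groupwise_dice(preds, gts, groups):
    # preds/gts: lists of binary masks; groups: subgroup label per subject
    scores = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        scores[g] = np.mean([dice(preds[i], gts[i]) for i in idx])
    return scores  # e.g. {"white_female": 0.88, "black_female": 0.83, ...}
```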


Autism spectrum disorder classification based on interpersonal neural synchrony: Can classification be improved by dyadic neural biomarkers using unsupervised graph representation learning?

Christian Gerloff (RWTH Aachen University)*; Kerstin Konrad (RWTH Aachen University); Jana Kruppa (RWTH Aachen University); Martin Schulte-Rüther (University Medical Center Göttingen); Vanessa Reindl (Nanyang Technological University)

Abstract: Research in machine learning for autism spectrum disorder (ASD) classification holds the promise of improving clinical diagnosis. However, recent studies in clinical imaging have shown the limited generalization of biomarkers across and beyond benchmark datasets. Despite increasing model complexity and sample sizes in neuroimaging, the classification performance of ASD remains far from clinical application. This raises the question of how we can overcome these barriers to develop early biomarkers for ASD. One approach might be to rethink how we operationalize the theoretical basis of this disease in machine learning models. Here, we introduce unsupervised graph representations that explicitly map the neural mechanisms of a core aspect of ASD, deficits in dyadic social interaction, as assessed by dual-brain recordings termed hyperscanning, and evaluate their predictive performance. The proposed method differs from existing approaches in that it is better suited to capturing social interaction deficits on a neural level and is applicable to young children and infants. First results on functional near-infrared spectroscopy data indicate the potential predictive capacity of a task-agnostic, interpretable graph representation. This first effort to leverage interaction-related deficits on the neural level to classify ASD may stimulate new approaches and methods to enhance existing models and achieve developmental ASD biomarkers in the future.


fMRI-S4: learning short- and long-range dynamic fMRI dependencies using 1D Convolutions and State Space Models

Ahmed ElGazzar (AMC)*; Rajat Thomas (Amsterdam University Medical Center); Guido Van Wingen (AMC)

Abstract: Single-subject mapping of resting-state brain functional activity to non-imaging phenotypes is a major goal of neuroimaging. The large majority of learning approaches applied today rely either on static representations or on short-term temporal correlations. This is at odds with the nature of brain activity, which is dynamic and exhibits both short- and long-range dependencies. Further, sophisticated new deep learning approaches have been developed and validated on single tasks/datasets. Applying these models to a different target typically requires exhaustive hyperparameter search, model engineering, and trial and error to obtain competitive results with simple linear models. This in turn limits their adoption and hinders fair benchmarking in a rapidly developing area of research. To this end, we propose fMRI-S4, a versatile deep learning model for the classification of phenotypes and psychiatric disorders from the timecourses of resting-state functional magnetic resonance imaging (rs-fMRI) scans. fMRI-S4 captures short- and long-range temporal dependencies in the signal using 1D convolutions and the recently introduced S4 state-space models. The proposed architecture is lightweight, sample-efficient, and robust across tasks/datasets. We validate fMRI-S4 on the tasks of diagnosing major depressive disorder (MDD), diagnosing autism spectrum disorder (ASD), and sex classification on three multi-site rs-fMRI datasets. We show that fMRI-S4 can outperform existing methods on all three tasks and can be trained as a plug-and-play model without special hyperparameter tuning for each setting.
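
A very rough structural sketch of a convolution-plus-state-space classifier is given below; PyTorch has no built-in S4 layer, so a GRU stands in for the S4 block here, and all shapes are illustrative assumptions:

```python
# Structural sketch only: short-range patterns via 1D convolutions, long-range
# dependencies via a sequence model (GRU standing in for the authors' S4 layer),
# then temporal pooling and a linear classification head.
import torch
import torch.nn as nn

class TimecourseClassifier(nn.Module):
    def __init__(self, n_rois=200, hidden=128, n_classes=2):
        super().__init__()
        self.local = nn.Sequential(  # short-range: 1D convolutions over time
            nn.Conv1d(n_rois, hidden, kernel_size=5, padding=2), nn.GELU(),
        )
        self.long_range = nn.GRU(hidden, hidden, batch_first=True)  # S4 stand-in
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (B, n_rois, T)
        h = self.local(x).transpose(1, 2)    # (B, T, hidden)
        h, _ = self.long_range(h)
        return self.head(h.mean(dim=1))      # pool over time, classify
```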


Boundary Distance Loss for Intra-/Extra-meatal Segmentation of Vestibular Schwannoma

Navodini Wijethilake (King’s College London)*; Aaron Kujawa (King’s College London); Reuben Dorent (King’s College London); Muhammad Asad (King’s College London); Anna Oviedova (King’s College Hospital); Tom Vercauteren (King’s College London); Jonathan Shapey (King’s College London)

Abstract: Vestibular schwannoma (VS) typically grows from the inner ear toward the brain. It can be separated into two regions, intrameatal and extrameatal, corresponding to being inside or outside the inner ear canal, respectively. The growth of the extrameatal region is a key factor in the disease management followed by clinicians. In this work, a VS segmentation approach with subdivision into intra-/extra-meatal parts is presented. We annotated a dataset consisting of 227 T2 MRI instances, acquired longitudinally from 137 patients, excluding post-operative instances. We propose a staged approach, with the first stage performing whole-tumour segmentation and the second stage performing intra-/extra-meatal segmentation using the T2 MRI along with the mask obtained from the first stage. To improve the accuracy of the predicted meatal boundary, we introduce a task-specific loss, which we call the Boundary Distance Loss. Performance is evaluated against direct intra-/extra-meatal segmentation, i.e., the baseline. Our proposed method, with the two-stage approach and the Boundary Distance Loss, achieved Dice scores of 0.8279±0.2050 and 0.7744±0.1352 for the extrameatal and intrameatal regions respectively, significantly improving over the baseline, which gave Dice scores of 0.7939±0.2325 and 0.7475±0.1346 for the extrameatal and intrameatal regions respectively.
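
One generic way to build a boundary-distance-style penalty, sketched under our own assumptions (not necessarily the paper's exact formulation), is to weight segmentation errors by their distance to the ground-truth boundary:

```python
# Errors far from the true meatal boundary cost more: weight the absolute
# prediction error by the Euclidean distance transform of the ground truth.
import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_distance_loss(pred: np.ndarray, gt: np.ndarray) -> float:
    # pred, gt: binary masks. The two EDTs together give, for every voxel,
    # (approximately) its distance to the ground-truth boundary.
    dist = distance_transform_edt(gt == 0) + distance_transform_edt(gt == 1)
    err = np.abs(pred.astype(float) - gt.astype(float))
    return float((err * dist).mean())
```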


Self-Supervised Test-Time Adaptation for Medical Image Segmentation

Hao Li (Vanderbilt University)*; Han Liu (Vanderbilt University); Dewei Hu (Vanderbilt University); Jiacheng Wang (Vanderbilt University); Hans Johnson (The University of Iowa); Omar Sherbini (CHOP); Francesco Gavazzi (CHOP); Russell D’Aiello (CHOP); Adeline Vanderver (CHOP); Jeff Long (University of Iowa); Jane Paulsen (University of Iowa); Ipek Oguz (Vanderbilt University)

Abstract: The performance of supervised convolutional neural networks (CNNs) often drops when they encounter a domain shift. Recently, unsupervised domain adaptation (UDA) and domain generalization (DG) techniques have been proposed to solve this problem. However, UDA and DG approaches require access to source domain data, which may not always be available in practice due to data privacy. In this paper, we propose a novel test-time adaptation framework for volumetric medical image segmentation that requires neither source domain data for adaptation nor target domain data for offline training. Specifically, our proposed framework needs only the CNNs pre-trained in the source domain and the target image itself. Our method aligns the target image to the source domain at both the image and latent feature levels during test time. There are three parts to our proposed framework: (1) a multi-task segmentation network (Seg), (2) autoencoders (AEs), and (3) a translation network (T). Seg and the AEs are pre-trained with source domain data. At test time, the weights of these pre-trained CNNs (the decoders of Seg and the AEs) are fixed, and T is trained to align the target image to the source domain at the image level via the autoencoders, which optimize the similarity between input and reconstructed output. The encoder of Seg is also updated with self-supervised tasks to increase the domain generalizability of the model towards the source domain at the feature level. We evaluate our method on healthy controls, adult Huntington’s disease (HD) patients, and pediatric Aicardi-Goutières syndrome (AGS) patients, scanned with different scanners and MRI protocols. The results indicate that our proposed method improves the performance of CNNs in the presence of domain shift at test time.
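
Schematically, the test-time loop could look like the following sketch, where every module is a placeholder and only the broad structure (frozen decoders; T and the Seg encoder updated on the single target image) follows the description above:

```python
# Test-time adaptation sketch: decoders stay frozen, while the translation
# network T and the segmentation encoder are updated on the target image via
# an autoencoder reconstruction loss. All modules here are placeholders.
import torch

def adapt_to_target(target_img, T, seg_encoder, seg_decoder, ae, steps=50):
    for p in list(seg_decoder.parameters()) + list(ae.decoder.parameters()):
        p.requires_grad_(False)  # frozen source-domain decoders
    opt = torch.optim.Adam(list(T.parameters()) + list(seg_encoder.parameters()),
                           lr=1e-4)
    for _ in range(steps):
        x = T(target_img)                            # align image to source domain
        recon = ae(x)
        loss = torch.nn.functional.l1_loss(recon, x)  # image-level alignment
        opt.zero_grad(); loss.backward(); opt.step()
    return seg_decoder(seg_encoder(T(target_img)))   # adapted segmentation
```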


Non-parametric ODE-based disease progression model of brain biomarkers in Alzheimer’s disease

Matías Bossa (Vrije Universiteit Brussel)*; Abel Díaz Berenguer (Vrije Universiteit Brussel); Hichem Sahli (Vrije Universiteit Brussel)

Abstract: Data-driven disease progression models of Alzheimer’s disease are important for clinical prediction model development, disease mechanism understanding, and clinical trial design. Among them, dynamical models are particularly appealing because they are intrinsically interpretable. Most dynamical models proposed so far are consistent with a linear chain of events, inspired by the amyloid cascade hypothesis. However, it is now widely acknowledged that disease progression is not fully compatible with this conceptual model, at least in sporadic Alzheimer’s disease, and more flexibility is needed to model the full spectrum of the disease. We propose a Bayesian model of the joint evolution of brain image-derived biomarkers that explicitly models the biomarkers’ velocities as a function of their current values and other subject characteristics. The model includes a system of ordinary differential equations to describe the biomarkers’ dynamics and places a Gaussian process prior on the velocity field. We illustrate the model on amyloid PET SUVR and MRI-derived volumetric features from the ADNI study.
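
In generic notation (ours, not necessarily the paper's parameterization), the model class reads:

```latex
% Schematic ODE-based progression model (our notation):
%   x_i(t): biomarker vector of subject i;  z_i: subject covariates;
%   k: GP covariance kernel over (state, covariate) inputs.
\frac{\mathrm{d}\mathbf{x}_i(t)}{\mathrm{d}t} = f\big(\mathbf{x}_i(t), \mathbf{z}_i\big),
\qquad
f \sim \mathcal{GP}\big(0,\, k\big)
```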


Learning Interpretable Regularized Ordinal Models from 3D Mesh Data for Neurodegenerative Disease Staging

Yuji Zhao (Illinois Institute of Technology); Max A. Laansma (Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Anatomy & Neurosciences, Amsterdam Neuroscience, Amsterdam, The Netherlands); Eva M. van Heese (Amsterdam UMC, Vrije Universiteit Amsterdam, Department of Anatomy & Neurosciences, Amsterdam Neuroscience, Amsterdam, The Netherlands); Conor Owens-Walton (University of Southern California); Laura M. Parkes (Division of Neuroscience and Experimental Psychology, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester Academic Health Science Centre, Manchester, UK); Ines Debove (Department of Neurology, University Hospital Bern, Inselspital, University of Bern, Bern); Christian Rummel (Inselspital, University Hospital Bern); Roland Wiest (UniBern); Fernando Cendes (UNICAMP); Rachel P Guimaraes (UNICAMP); Clarissa Lin Yasuda (Unicamp); Jiun-Jie Wang (Department of Medical Imaging and Radiological Sciences, Chang Gung University, Taoyuan City, Taiwan); Tim J. Anderson (Department of Medicine, University of Otago, Christchurch, Christchurch, New Zealand); John C. Dalrymple-Alford (New Zealand Brain Research Institute, Christchurch, New Zealand); Tracy R. Melzer (Department of Medicine, University of Otago, Christchurch, Christchurch, New Zealand); Toni L. Pitcher (Department of Medicine, University of Otago, Christchurch, Christchurch, New Zealand); Reinhold Schmidt (Department of Neurology, Clinical Division of Neurogeriatrics, Medical University Graz, Graz, Austria); Petra Schwingenschuh (Department of Neurology, Clinical Division of Neurogeriatrics, Medical University Graz, Graz, Austria); Gaëtan Garraux (GIGA-CRC In Vivo Imaging, University of Liège, Liège, Belgium); Mario Rango (Excellence Center for Advanced MR Techniques and Parkinson’s Disease Center, Neurology Unit, Fondazione IRCCS Cà Granda Maggiore Policlinico Hospital, University of Milan, Milan, Italy); Letizia Squarcina (Scientific Institute IRCCS “E. Medea”); Sarah Al-Bachari (Faculty of Health and Medicine, The University of Lancaster, Lancaster, UK); Hedley C.A. Emsley (Division of Neuroscience and Experimental Psychology, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester Academic Health Science Centre, Manchester, UK); Johannes C. Klein (Department of Clinical Neurosciences, Division of Clinical Neurology, Oxford Parkinson’s Disease Centre, Nuffield, University of Oxford, Oxford, UK); Clare E. Mackay (Department of Psychiatry, University of Oxford, Oxford, UK); Michiel F. Dirkx (Department of Neurology and Center of Expertise for Parkinson & Movement Disorders, Donders Institute for Brain, Cognition and Behaviour, Radboud University); Rick Helmich (Department of Neurology and Center of Expertise for Parkinson & Movement Disorders, Donders Institute for Brain, Cognition and Behaviour, Radboud University); Francesca Assogna (Laboratory of Neuropsychiatry, IRCCS Santa Lucia Foundation, Rome, Italy); Fabrizio Piras (Laboratory of Neuropsychiatry, IRCCS Santa Lucia Foundation, Rome, Italy); Joanna K. Bright (Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Marina del Rey, California, USA); Gianfranco Spalletta (Laboratory of Neuropsychiatry, IRCCS Santa Lucia Foundation, Rome, Italy); Kathleen Poston (Stanford University); Christine Lochner (Stellenbosch University, SA MRC Unit on Risk and Resilience in Mental Disorders); Corey T. McMillan (University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA); Daniel Weintraub (University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania, USA); Jason Druzgal (Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, Virginia, USA); Benjamin Newman (University of Virginia, Charlottesville, Virginia, USA); Odile van den Heuvel (Department of Anatomy & Neurosciences, Amsterdam Neuroscience, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam); Neda Jahanshad (University of Southern California); Paul Thompson (Imaging Genetics Center); Ysbrand D. van der Werf (Department of Anatomy & Neurosciences, Amsterdam Neuroscience, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam); Boris Gutman (Illinois Institute of Technology)*

Abstract: We extend the sparse, spatially piecewise-contiguous linear classification framework for mesh-based data to ordinal logistic regression. The algorithm is intended for use with subcortical shape and cortical thickness data where progressive clinical staging is available, as is generally the case in neurodegenerative diseases. We apply the tool to Parkinson’s and Alzheimer’s disease staging. The resulting biomarkers predict Hoehn-Yahr and cognitive impairment stages at competitive accuracy; the models remain parsimonious and outperform one-against-all models in terms of the Akaike and Bayesian information criteria.
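
For reference, a generic cumulative-logit (proportional-odds) ordinal model with a sparsity penalty, which the framework above extends to mesh data with spatial contiguity, can be written as (our notation, not the paper's):

```latex
% y_i in {1,...,K}: clinical stage;  x_i: vertex-wise mesh features;
% sigma: logistic function;  lambda: sparsity weight.
P(y_i \le k \mid \mathbf{x}_i) = \sigma\big(\theta_k - \mathbf{w}^{\top}\mathbf{x}_i\big),
\quad k = 1, \dots, K-1,
\qquad
\min_{\mathbf{w},\,\boldsymbol{\theta}}
\; -\sum_i \log P(y_i \mid \mathbf{x}_i) + \lambda \lVert \mathbf{w} \rVert_1
```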


Data augmentation via partial nonlinear registration for brain-age prediction

Marc-Andre Schulz (Charité – Universitätsmedizin Berlin)*; Alexander Koch (Charité – Universitätsmedizin Berlin); Vanessa Emanuela Guarino (BIH/MDC); Dagmar Kainmueller (MDC); Kerstin Ritter (Charité – Universitätsmedizin Berlin)

Abstract: Data augmentation techniques that improve the classification and segmentation of natural scenes often do not transfer well to brain imaging data. The conceptually most plausible augmentation technique for biological tissue, elastic deformation, works well on microscopic tissue but is limited on macroscopic structures like the brain, as the majority of mathematically possible elastic deformations of the human brain are anatomically implausible. Here, we characterize the subspace of anatomically plausible deformations for a participant’s brain image by nonlinearly registering the image to the brain images of several reference participants. Using the resulting warp fields for data augmentation outperformed both random deformations and the non-augmented baseline in age prediction from T1 brain images.
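
A minimal sketch of applying such precomputed warp fields as augmentation follows; computing the fields themselves (e.g., with a standard registration toolbox) is assumed done elsewhere, and all names are illustrative:

```python
# Warp-field augmentation sketch: given displacement fields from nonlinearly
# registering a subject to several reference subjects, apply a randomly chosen
# field to resample the image.
import random
import numpy as np
from scipy.ndimage import map_coordinates

def warp_augment(img: np.ndarray, disp_fields) -> np.ndarray:
    # img: (X, Y, Z); disp_fields: list of (3, X, Y, Z) displacement fields
    disp = random.choice(disp_fields)
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in img.shape], indexing="ij"))
    return map_coordinates(img, grid + disp, order=1, mode="nearest")
```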


Lifestyle factors that promote brain structural resilience in individuals with genetic risk factors for dementia

Elizabeth Haddad (University of Southern California)*; Shayan Javid (University of Southern California); Nikhil Dhinagar (University of Southern California); Alyssa H. Zhu (University of Southern California); Pradeep Lam (University of Southern California); Iyad S Ba Gari (University of Southern California); Arpana Gupta (University of California, Los Angeles); Paul Thompson (Imaging Genetics Center); Talia Nir (Imaging Genetics Center); Neda Jahanshad (University of Southern California)

Abstract: Structural brain changes are commonly detectable on MRI before the progressive loss of cognitive function that occurs in individuals with Alzheimer’s disease and related dementias (ADRD). Some proportion of ADRD risk may be modifiable through lifestyle. Certain lifestyle factors may be associated with slower brain atrophy rates, even for individuals at high genetic risk for dementia. Here, we evaluated 44,100 T1-weighted brain MRIs and detailed lifestyle reports from UK Biobank participants who had one or more genetic risk factors for ADRD, including a family history of dementia or one or two ApoE4 risk alleles. In this cross-sectional dataset, we use a machine-learning-based metric of age predicted from cross-sectional brain MRIs, or ‘brain age’, which, when compared to the participant’s chronological age, may be considered a proxy for abnormal brain aging and degree of atrophy. We used a 3D convolutional neural network trained on T1w brain MRIs to identify the subset of genetically high-risk individuals with a substantially lower brain age than chronological age, which we interpret as resilience to neurodegeneration. We used association rule learning to identify sets of lifestyle factors that were frequently associated with brain-age resiliency. Never or rarely adding salt to food was consistently associated with resiliency. Sex-stratified analyses showed that anthropometry measures and alcohol consumption contribute differently to male vs. female resilience. These findings may shed light on distinctive risk-profile modifications that can be made to mitigate accelerated aging and risk for ADRD.
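
Association rule learning over binarized lifestyle factors can be sketched as below; the use of mlxtend and the column names are assumptions for illustration, not the study's pipeline:

```python
# Mine frequent itemsets and rules over one-hot lifestyle factors of the
# brain-age-resilient subgroup. Any apriori implementation would do; mlxtend
# is assumed here for brevity, and the data are invented stand-ins.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

lifestyle = pd.DataFrame({
    "rarely_adds_salt": [1, 1, 0, 1],
    "moderate_alcohol": [0, 1, 1, 0],
    "resilient":        [1, 1, 0, 1],
}).astype(bool)
itemsets = apriori(lifestyle, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
```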


The MLCN 2021 Best Paper Award

Based on the evaluation by the MLCN 2021 scientific committee of the scientific content, the significance of the contribution, and the clarity of the communication, the MLCN 2021 best paper award is presented to Qiang Ma and his coauthors Emma C. Robinson, Bernhard Kainz, Daniel Rueckert, and Amir Alansary for the paper titled PialNN: A Fast Deep Learning Framework for Cortical Pial Surface Reconstruction. The MLCN 2021 best paper award is sponsored by the Donders Institute for Brain, Cognition and Behaviour.

Qiang Ma is a PhD student in the Department of Computing at Imperial College London (ICL). He works in the BioMedIA group under the supervision of Prof. Daniel Rueckert and Dr. Bernhard Kainz. Prior to joining ICL, he received his MS degree in Computer Science from Columbia University in 2020 and his BS degree in Mathematics from Harbin Institute of Technology in 2018. His research interest is 3D geometric deep learning for surface reconstruction from medical images.

The winner was selected from among 17 candidate papers. The runner-up papers are:

  • Wilms, Matthias, Pauline Mouches, Jordan J. Bannister, Deepthi Rajashekar, Sönke Langner, and Nils D. Forkert. “Towards Self-explainable Classifiers and Regressors in Neuroimaging with Normalizing Flows.”
  • Yang, Ne, Xiong Xiao, Xianyu Wang, Guocan Gu, Liwei Zhang, and Hongen Liao. “H3K27M Mutations Prediction for Brainstem Gliomas Based on Diffusion Radiomics Learning.”
  • Ramakrishnapillai, Sreekrishna, Harris R. Lieberman, Jennifer C. Rood, Stefan M. Pasiakos, Kori Murray, Preetham Shankapal, and Owen T. Carmichael. “Constrained Learning of Task-Related and Spatially-Coherent Dictionaries from Task fMRI Data.”

MLCN 2021 Keynote 1: Dr. Adrian Dalca

Title: Unsupervised Learning of Image Correspondences in Neuroimaging

Abstract: Image alignment, or registration, is fundamental to many neuroimaging tasks. Classical neuroimaging registration methods have undergone decades of technical development but are often prohibitively slow, since they solve an optimization problem for each 3D image pair. In this talk, I will first introduce the modern deep learning paradigm that enables deformable medical image registration that is more accurate and substantially faster than traditional methods. Building on these models, I will discuss new learning frameworks now possible for a variety of tasks, such as building a new class of on-demand conditional templates to enable new neuroimaging applications. I will also discuss other recent exciting directions, such as modality-invariant learning-based registration methods that work on unseen test-time contrasts, and hyperparameter-agnostic learning for neuroimage registration.

MLCN 2021 Keynote 2: Prof. Paul Thompson

Title: AI and Deep Learning in Medical Imaging and Genomics: Lessons from ENIGMA’s Global Studies of Brain Diseases

Abstract: AI and deep learning are rapidly advancing, with new mathematics and algorithms being developed daily to discover patterns in medical imaging data. AI can aid diagnosis, prognosis, and disease subtyping, and can show us how to tackle previously unimaginable problems, such as ‘cracking the brain’s genetic code’. With examples from two large-scale initiatives – ENIGMA and AI4AD – we describe some of the triumphs and challenges in learning from medical imaging data collected across the world. Since 2009, ENIGMA has (1) performed the largest neuroimaging studies of 13 major brain diseases, from Parkinson’s, epilepsy, and ataxia to schizophrenia, depression, PTSD, autism, ADHD, and OCD, and (2) led the largest genetic studies of the human brain. Over 2000 scientists take part in ENIGMA’s 51 working groups, pooling data from over 45 countries and suggesting and tackling new problems. Deep learning methods, including CNN and RNN variants, variational autoencoders, and generative adversarial networks (GANs), are being applied to these data to answer key questions about the brain. We cover the basic ideas, pitfalls, and challenges in applying these machine learning and deep learning methods to diverse biomedical data worldwide. We illustrate some lessons learned from multisite machine learning projects that deal with data imbalance, domain shift and adaptation, and federated learning, as well as some unsolved problems and opportunities to take part in international machine learning challenges.

Figure: A GAN architecture applied to MRIs.