The MLCN2020 Best Paper Award

Based on the evaluation by the MLCN 2020 scientific committee of each submission's scientific content, significance of contribution, and clarity of communication, the MLCN 2020 Best Paper Award is presented to Naresh Nandakumar and his coauthors Niharika S. D’Souza, Komal Manzoor, Jay Pillai, Sachin Gujar, Haris Sair, and Archana Venkataraman for the paper entitled “A Multi-Task Deep Learning Framework to Localize the Eloquent Cortex in Brain Tumor Patients Using Dynamic Functional Connectivity”.

Naresh Nandakumar

Naresh is currently a 4th-year PhD student in the Neural Systems Analysis lab at Johns Hopkins University. He received his Bachelor’s degree from Vanderbilt University, where he did research in the MASI lab. His research interests lie at the intersection of statistical modeling/deep learning and clinical neuroimaging. He has presented at ISBI 2018, MICCAI 2018, MICCAI 2019, and MICCAI 2020, and develops machine learning models to tackle clinically relevant problems. Currently, he is working on eloquent cortex detection in brain tumor patients using resting-state fMRI connectivity. He is always eager to speak with prospective collaborators and can be reached by email or on LinkedIn.

The winner was selected from among the 18 papers accepted for presentation at MLCN 2020. The runner-up papers are:

  • Jianyuan Zhang, Feng Shi, Lei Chen, Zhong Xue, Lichi Zhang, and Dahong Qian: “Ischemic Stroke Segmentation from CT Perfusion Scans Using Cluster-Representation Learning”
  • Matthias Wilms, Jordan J. Bannister, Pauline Mouches, M. Ethan MacDonald, Deepthi Rajashekar, Sönke Langner, and Nils D. Forkert: “Bidirectional Modeling and Analysis of Brain Aging with Normalizing Flows”
  • Samuel Budd, Prachi Patkee, Ana Baburamani, Mary Rutherford, Emma C. Robinson, and Bernhard Kainz: “Surface Agnostic Metrics for Cortical Volume Segmentation and Regression”
  • Yannick Suter, Urspeter Knecht, Roland Wiest, Ekkehard Hewer, Philippe Schucht, and Mauricio Reyes: “Towards MRI Progression Features for Glioblastoma Patients: From Automated Volumetry and Classical Radiomics to Deep Feature Learning”

Guidelines for Recording your Presentation using Zoom

Please make sure you have already prepared: 

  • Your computer with either Windows or macOS.
  • Your presentation: Please use 16:9 aspect ratio for your slides.
  • An internet connection.
  • Headphones and a high-quality microphone.
  • A webcam.

To record your presentation using the Zoom platform, follow these steps:

  1. Download and install the “Zoom Client for Meetings” software from the Zoom website if you haven’t already.
  2. Double click on “Start Zoom” to launch the application.
  3. Sign In. If you don’t have an account, click “Sign Up Free”.
  4. After signing in, you will see the Home tab.
  5. Click on the gear icon ⚙ in the top-right corner to open the settings.
  6. In the Video tab, select the camera you want Zoom to use, choose “16:9 (Widescreen)”, and select “Enable HD”.
  7. In the Audio tab, set your microphone and speaker, test them, and adjust their volume.
  8. In the Recording tab, verify the location for your local recordings and change it if necessary. Select “Optimize for 3rd party video editor”, “Record video during screen sharing”, and “Place video next to the shared screen in the recording”.
  9. Close the settings and prepare to start recording. Close all applications except Zoom and your presentation. In the Zoom Client, start a new meeting from the “Home Tab”. Click “Share Screen” in the meeting controls.
  10. Select your presentation window, select the “Share Computer Sound” checkbox and click the “Share” button on the right. A green border indicates which window you are currently sharing.
  11. IMPORTANT: Please be sure to maximize the small floating window showing the webcam video to make it as large as possible. Do so by dragging the bottom-left corner of the window as wide as it can go.
    NOTE: This setting has a direct impact on the recorded video layout and will have a negative impact on the recording if not set properly.
  12. While sharing, switch the presentation software into slide show/presentation mode.
  13. To ensure that the webcam video does not overlap with your view of the slides, click in the center of the black bar at the top of the video window and drag it to the bottom-right corner of your screen. Do not simply minimise this window, as doing so will affect how you appear in the final recording; having the webcam video partially off screen, however, will not. The final video will display your slides on one side of the screen and the recording of your webcam on the other.
  14. Start the recording from the meeting controls: “More” -> “Record”.
  15. Give your lecture and please make sure you do not go over your allotted time.
  16. Once you finish your lecture, end the meeting by clicking the red “End” button in the bottom-right corner and selecting “End Meeting for All”. The recording will stop automatically.
  17. After the meeting has ended, Zoom will convert the recording so you can access the files. If you have trouble finding the recorded video file, return to the Zoom Home tab, select “Meetings”, and look for your recorded files under “Recorded” on the left.
  18. Locate the .mp4 file of the recording and open it.
    NOTE: We will trim the beginning and end of the video for you so that it plays only from the start to the end of your lecture.
  19. Review your lecture:
    • Are both the video on the right and the presentation on the left visible?
    • Is the audio clear?
    • Are you happy with the overall lecture?
  20. If you are happy with your video, please upload the file to the link provided to you in the email regarding these guidelines.
  21. If you are not happy with your video, please go back to the beginning of this section and record it again.


MLCN 2020: Keynote by Dr. Duygu Tosun

Title: Impact of AI and deep learning on imaging of neurodegenerative diseases

Abstract: Biomarkers have become increasingly important to understand the biology of neurodegenerative diseases. We now see a paradigm shift recasting the definition of neurodegenerative disease in living people from a syndromal to a biological construct. Effective implementation of such biological constructs, though, requires widespread availability of biomarkers. This talk will address some of the challenges and AI-based advances in neuroimaging-based biomarkers for faster, safer, and smarter operationalization of biomarker-based classification, risk assessment, diagnosis, prognosis, and even prediction of therapy responses in neurodegenerative diseases.

MLCN 2020: Keynote by Dr. Jorge Cardoso

Title: AI-enabled Neurology: Dealing with the real world

Abstract: Recent developments in artificial intelligence and the availability of large-scale medical imaging datasets allow us to learn what the human brain truly looks like from a biological, physiological, anatomical and pathological point of view. This learning process can be further augmented by diagnostic and radiological report data available in clinical systems, providing an integrated view of the human interpretation of medical imaging data. This talk will present how these models can learn from big and unstructured data and then be used as tools for precision medicine, where we aim to translate advanced imaging technologies and biomarkers to clinical practice in order to streamline the clinical workflow and improve the quality of care. This process of technical translation requires deep algorithmic integration into the radiological workflow, fully automated image processing, quality control and assurance, extensive validation on clinical grade data, and the deployment of an automated reporting system that summarizes a complex set of imaging biomarkers, highlighting the presence of abnormalities.

MLCN 2020: Accepted Papers

Surface Agnostic Metrics for Cortical Volume Segmentation and Regression

Samuel F Budd (Imperial College London)*; Prachi Patkee (King’s College); Ana Baburamani (King’s College); Mary Rutherford (KCL); Emma C Robinson (King’s College); Bernhard Kainz (Imperial College London)

Abstract: The cerebral cortex performs higher-order brain functions and is thus implicated in a range of cognitive disorders. Current analysis of cortical variation is typically performed by fitting surface mesh models to inner and outer cortical boundaries and investigating metrics such as surface area and cortical curvature or thickness. These, however, take a long time to run, and are sensitive to motion and image and surface resolution, which can prohibit their use in clinical settings. In this paper, we instead propose a machine learning solution, training a novel architecture to predict cortical thickness and curvature metrics from T2 MRI images, while additionally returning metrics of prediction uncertainty. Our proposed model is tested on a clinical cohort (Down Syndrome) for which surface-based modelling often fails. Results suggest that deep convolutional neural networks are a viable option to predict cortical metrics across a range of brain development stages and pathologies.

Automatic Tissue Segmentation with Deep Learning in Patients with Congenital or Acquired Distortion of Brain Anatomy

Gabriele Amorosino (Fondazione Bruno Kessler and University of Trento)*; Denis Peruzzo (Scientific Institute, IRCCS Eugenio Medea); Pietro Astolfi (Fondazione Bruno Kessler and University of Trento); Daniela Redaelli (Scientific Institute, IRCCS Eugenio Medea); Paolo Avesani (Fondazione Bruno Kessler); Filippo Arrigoni (Scientific Institute, IRCCS Eugenio Medea); Emanuele Olivetti (Fondazione Bruno Kessler and University of Trento)

Abstract: Brains with complex distortion of cerebral anatomy present several challenges to automatic tissue segmentation methods of T1-weighted MR images. First, the very high variability in the morphology of the tissues can be incompatible with the prior knowledge embedded within the algorithms. Second, the availability of MR images of distorted brains is very scarce, so the methods in the literature have not addressed such cases so far. In this work, we present the first evaluation of state-of-the-art automatic tissue segmentation pipelines on T1-weighted images of brains with different severity of congenital or acquired brain distortion. We compare traditional pipelines and a deep learning model, i.e. a 3D U-Net trained on normal-appearing brains. Unsurprisingly, traditional pipelines completely fail to segment the tissues with strong anatomical distortion. Surprisingly, the 3D U-Net provides useful segmentations that can be a valuable starting point for manual refinement by experts/neuroradiologists.

Bidirectional Modeling and Analysis of Brain Aging with Normalizing Flows

Matthias Wilms (University of Calgary)*; Jordan J. Bannister (University of Calgary); Pauline Mouches (University of Calgary); M. Ethan MacDonald (University of Calgary); Deepthi Rajashekar (University of Calgary); Sönke Langner (Rostock University Medical Center); Nils Daniel Forkert (Department of Radiology & Hotchkiss Brain Institute, University of Calgary)

Abstract: Brain aging is a widely studied longitudinal process throughout which the brain undergoes considerable morphological changes and various machine learning approaches have been proposed to analyze it. Within this context, brain age prediction from structural MR images and age-specific brain morphology template generation are two problems that have attracted much attention. While most approaches tackle these tasks independently, we assume that they are inverse directions of the same functional bidirectional relationship between a brain’s morphology and an age variable. In this paper, we propose to model this relationship with a single conditional normalizing flow, which unifies brain age prediction and age-conditioned generative modeling in a novel way. In an initial evaluation of this idea, we show that our normalizing flow brain aging model can accurately predict brain age while also being able to generate age-specific brain morphology templates that realistically represent the typical aging trend in a healthy population. This work is a step towards unified modeling of functional relationships between 3D brain morphology and clinical variables of interest with powerful normalizing flows.

A Multi-Task Deep Learning Framework to Localize the Eloquent Cortex in Brain Tumor Patients Using Dynamic Functional Connectivity

Naresh Nandakumar (JHU)*; Niharika S. D’Souza (The Johns Hopkins University); Komal Manzoor (JHU); Jay Pillai (Johns Hopkins RAIL); Sachin Gujar (Johns Hopkins University); Haris Sair (Johns Hopkins RAIL); Archana Venkataraman (Johns Hopkins University)

Abstract: We present a novel deep learning framework that uses dynamic functional connectivity to simultaneously localize the language and motor areas of the eloquent cortex in brain tumor patients. Our method leverages convolutional layers to extract graph-based features from the dynamic connectivity matrices and a long-short term memory (LSTM) attention network to weight the relevant time points during classification. The final stage of our model employs multi-task learning to identify different eloquent subsystems. Our unique training strategy finds a shared representation between the cognitive networks of interest, which enables us to handle missing patient data. We evaluate our method on resting-state fMRI data from 56 brain tumor patients while using task fMRI activations as surrogate ground-truth labels for training and testing. Our model achieves higher localization accuracies than conventional deep learning approaches and can identify bilateral language areas even when trained on left-hemisphere lateralized cases. Hence, our method may ultimately be useful for preoperative mapping in tumor patients.
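
The dynamic connectivity matrices this framework consumes are commonly built with a sliding-window correlation over ROI time series. Below is a minimal, illustrative sketch of that preprocessing idea; the window and stride values and the `dynamic_connectivity` helper are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def dynamic_connectivity(timeseries, window=30, stride=5):
    """Sliding-window correlation matrices from a (T, R) ROI time-series array.

    window/stride are hypothetical values; the paper's scheme may differ.
    """
    T, R = timeseries.shape
    mats = []
    for start in range(0, T - window + 1, stride):
        win = timeseries[start:start + window]
        mats.append(np.corrcoef(win, rowvar=False))  # (R, R) correlation matrix
    return np.stack(mats)                            # (n_windows, R, R)

rng = np.random.default_rng(0)
ts = rng.standard_normal((120, 10))  # 120 time points, 10 ROIs (toy data)
dfc = dynamic_connectivity(ts)
print(dfc.shape)  # (19, 10, 10)
```

Each (R, R) slice of the output would then be one time point fed to the convolutional feature extractor, with the LSTM attention network weighting slices over time.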

Deep Learning for Non-Invasive Cortical Potential Imaging

Alexandra Razorenova (Skolkovo Institute of Science and Technology)*; Nikolay Yavich (Skolkovo Institute of Science and Technology); Mikhail Malovichko (Skolkovo Institute of Science and Technology); Maxim Fedorov (Skolkovo Institute of Science and Technology); Nikolay Koshev (Skolkovo Institute of Science and Technology); Dmitry V. Dylov (Skolkovo Institute of Science and Technology)

Abstract: Electroencephalography (EEG) is a well-established non-invasive technique to measure brain activity, albeit with a limited spatial resolution. Variations in electric conductivity between different tissues distort the electric fields generated by cortical sources, resulting in smeared potential measurements on the scalp. One needs to solve an ill-posed inverse problem to recover the original neural activity. In this article, we present a generic method of recovering the cortical potentials from the EEG measurement by introducing a new inverse-problem solver based on deep Convolutional Neural Networks (CNN) in paired (U-Net) and unpaired (DualGAN) configurations. The solvers were trained on synthetic EEG-ECoG pairs that were generated using a head conductivity model computed using the Finite Element Method (FEM). These solvers are the first of their kind to provide robust translation of EEG data to the cortex surface using deep learning. Providing a fast and accurate interpretation of the tracked EEG signal, our approach promises a boost to the spatial resolution of future EEG devices.
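
For context, the classical baseline that such deep solvers aim to improve on is a regularized linear inverse of a forward model. The following toy sketch uses made-up dimensions and a random forward matrix (not the paper's FEM conductivity model) to show why regularization is needed when there are fewer sensors than sources.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((16, 64))      # toy forward model: 16 electrodes, 64 sources
x_true = np.zeros(64)
x_true[10] = 1.0                       # a single active cortical source
y = A @ x_true + 0.01 * rng.standard_normal(16)  # noisy scalp measurements

# Tikhonov-regularized least squares: argmin ||A x - y||^2 + lam * ||x||^2.
# The plain least-squares problem is underdetermined (16 equations, 64 unknowns),
# so the ridge term lam * I makes the normal equations invertible.
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(64), A.T @ y)
```

The CNN-based solvers in the paper replace this hand-tuned linear operator with a learned, nonlinear mapping from scalp potentials to cortical potentials.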

An anatomically-informed 3D CNN for brain aneurysm classification with weak labels

Tommaso Di Noto (Unil – CHUV)*; Guillaume Marie (CHUV); Sébastien Tourbier (University of Lausanne (UNIL)); Yasser Alemán-Gómez (CHUV); Guillaume Saliou (CHUV); Meritxell Bach Cuadra (UNIL); Patric Hagmann (Lausanne University Hospital (CHUV)); Jonas Richiardi (Lausanne University Hospital (CHUV))

Abstract: A commonly adopted approach to carry out detection tasks in medical imaging is to rely on an initial segmentation. However, this approach strongly depends on voxel-wise annotations, which are repetitive and time-consuming for medical experts to draw. An interesting alternative to voxel-wise masks are so-called “weak” labels: these can either be coarse or oversized annotations that are less precise, but noticeably faster to create. In this work, we address the task of brain aneurysm detection as a patch-wise binary classification with weak labels, in contrast to related studies that rather use supervised segmentation methods and voxel-wise delineations. Our approach comes with the non-trivial challenge of the data set creation: as for most focal diseases, anomalous patches (with aneurysm) are outnumbered by those showing no anomaly, and the two classes usually have different spatial distributions. To tackle this frequent scenario of inherently imbalanced, spatially skewed data sets, we propose a novel, anatomically-driven approach by using a multi-scale and multi-input 3D Convolutional Neural Network (CNN). We apply our model to 214 subjects (83 patients, 131 controls) who underwent Time-Of-Flight Magnetic Resonance Angiography (TOF-MRA) and presented a total of 111 unruptured cerebral aneurysms. We compare two strategies for negative patch sampling that have an increasing level of difficulty for the network, and we show how this choice can strongly affect the results. To assess whether the added spatial information helps improve performance, we compare our anatomically-informed CNN with a baseline, spatially-agnostic CNN. When considering the more realistic and challenging scenario including vessel-like negative patches, the former model attains the highest classification results (accuracy ≈ 95%, AUROC ≈ 0.95, AUPR ≈ 0.71), thus outperforming the baseline.

Ischemic Stroke Segmentation from CT Perfusion Scans Using Cluster-Representation Learning

Jianyuan Zhang (Shanghai Jiao Tong University); Feng Shi (Shanghai United Imaging Intelligence Co., Ltd.); Lei Chen (Shanghai United Imaging Intelligence Co.,Ltd); Zhong Xue (Shanghai United Imaging Intelligence Co., Ltd); Lichi Zhang (Shanghai Jiao Tong University); Dahong Qian (Shanghai Jiao Tong Univerisity)*

Abstract: Computed Tomography Perfusion (CTP) images have drawn extensive attention in acute ischemic stroke assessment due to their imaging speed and ability to provide dynamic perfusion quantification. However, the cerebral ischemic infarcted core has high individual variability and low contrast, and multiple CTP parametric maps need to be referred to for precise delineation of the core region. It has thus become a challenging task to develop automatic segmentation algorithms. Widely applied segmentation algorithms such as U-Net lack specific modeling for image subtypes in the dataset, and thus their performance remains unsatisfactory. In this paper, we propose a novel cluster-representation learning approach to address these difficulties. Specifically, we first cluster the training samples based on the similarity of their segmentation difficulty. Each cluster represents a different subtype of training images and is then used to train its own cluster-representative model. The models are capable of extracting cluster-representative features from training samples as clustering priors, which are further fused into an overall segmentation model (for all training samples). The fusion mechanism is able to adaptively select the optimal subset(s) of clustering priors, which can further guide the segmentation of each unseen testing image and reduce the influence of the high variability of CTP images. We have applied our method to 94 subjects of the ISLES 2018 dataset. Compared with the baseline U-Net, the experiments show an absolute increase of 8% in Dice score and a reduction of 10 mm in Hausdorff Distance for ischemic infarcted core segmentation. This method can also be generalized to other U-Net-like architectures to further improve their representative capacity.
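
The Dice score used to report this improvement is a standard overlap metric between a predicted and a reference binary mask. A generic sketch (not the authors' evaluation code):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice overlap between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks count as perfect

a = np.array([[1, 1, 0],
              [0, 1, 0]])
b = np.array([[1, 0, 0],
              [0, 1, 1]])
print(dice_score(a, b))  # 2*2 / (3+3) ≈ 0.667
```

An absolute Dice increase of 8%, as reported here, therefore means the overlap fraction itself improved by 0.08.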

SeizureNet: Multi-Spectral Deep Feature Learning for Seizure Type Classification

Umar Asif (IBM Research)*; Subhrajit Roy (IBM Research); Jianbin Tang (IBM Research); Stefan Harrer (IBM Research)

Abstract: Automatic classification of epileptic seizure types in electroencephalogram (EEG) data can enable more precise diagnosis and efficient management of the disease. This task is challenging due to factors such as low signal-to-noise ratios, signal artefacts, high variance in seizure semiology among epileptic patients, and limited availability of clinical data. To overcome these challenges, in this paper, we present SeizureNet, a deep learning framework which learns multi-spectral feature embeddings using an ensemble architecture for cross-patient seizure type classification. We used the recently released TUH EEG Seizure Corpus (V1.4.0 and V1.5.2) to evaluate the performance of SeizureNet. Experiments show that SeizureNet can reach a weighted F1 score of up to 0.941 for seizure-wise cross validation and 0.620 for patient-wise cross validation for scalp EEG based multi-class seizure type classification. We also show that the high-level feature embeddings learnt by SeizureNet considerably improve the accuracy of smaller networks through knowledge distillation for applications with low-memory constraints.

Decoding task states by spotting salient patterns at time points and brain regions

Yi Hao Chan (Nanyang Technological University); Sukrit Gupta (Nanyang Technological University); Liyanaarachchi Lekamalage Chamara Kasun (Nanyang Technological University); Jagath C. Rajapakse (Nanyang Technological University)*

Abstract: During task performance, brain states change dynamically and can appear recurrently. Recently, recurrent neural networks (RNN) have been used for identifying functional signatures underlying such brain states from task functional Magnetic Resonance Imaging (fMRI) data. While RNNs only model temporal dependence between time points, brain task decoding needs to model temporal dependencies of the underlying brain states. Furthermore, as only a subset of brain regions are involved in task performance, it is important to consider subsets of brain regions for brain decoding. To address these issues, we present a customised neural network architecture, Salient Patterns Over Time and Space (SPOTS), which not only captures dependencies of brain states at different time points but also pays attention to key brain regions associated with the task. On language and motor task data gathered in the Human Connectome Project, SPOTS improves brain state prediction by 17% to 40% as compared to the baseline RNN model. By spotting salient spatio-temporal patterns, SPOTS is able to infer brain states even on small time windows of fMRI data, which the present state-of-the-art methods struggle with. This allows for quick identification of abnormal task-fMRI scans, leading to possible future applications in task-fMRI data quality assurance and disease detection. Code is available at

Patch-based Brain Age Estimation from MR Images

Kyriaki-Margarita Bintsi (Imperial College London)*; Vasileios Baltatzis (King’s College London); Arinbjorn Kolbeinsson (Imperial College); Alexander Hammers (King’s College London); Daniel Rueckert (Imperial College London)

Abstract: Brain age estimation from Magnetic Resonance Images (MRI) derives the difference between a subject’s biological brain age and their chronological age. This is a potential biomarker for neurodegeneration, e.g. as part of Alzheimer’s disease. Early detection of neurodegeneration manifesting as a higher brain age can potentially facilitate better medical care and planning for affected individuals. Many studies have been proposed for the prediction of chronological age from brain MRI using machine learning and specifically deep learning techniques. Contrary to most studies, which use the whole brain volume, in this study, we develop a new deep learning approach that uses 3D patches of the brain as well as convolutional neural networks (CNNs) to develop a localised brain age estimator. In this way, we can obtain a visualization of the regions that play the most important role for estimating brain age, leading to more anatomically driven and interpretable results, and thus confirming relevant literature which suggests that the ventricles and the hippocampus are the areas that are most informative. In addition, we leverage this knowledge in order to improve the overall performance on the task of age estimation by combining the results of different patches using an ensemble method, such as averaging or linear regression. The network is trained on the UK Biobank dataset and the method achieves state-of-the-art results with a Mean Absolute Error of 2.46 years for purely regional estimates, 2.13 years for an ensemble of patches before bias correction, and 1.96 years after bias correction.
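
Combining patch-wise estimates by averaging or linear regression, as described above, can be sketched with toy numbers (all predictions and ages below are made up for illustration; the paper fits its ensemble on real UK Biobank data):

```python
import numpy as np

# Hypothetical predictions: rows = subjects, columns = patch-wise age estimators.
patch_preds = np.array([
    [70.1, 68.9, 71.5],
    [55.2, 54.8, 56.0],
    [62.3, 63.1, 61.7],
    [48.0, 47.2, 49.1],
])
true_age = np.array([70.0, 55.5, 62.5, 48.5])

# Simple ensemble: average the patch estimates for each subject.
avg = patch_preds.mean(axis=1)

# Learned ensemble: least-squares weights (plus intercept) fitted against labels.
X = np.column_stack([patch_preds, np.ones(len(patch_preds))])
w, *_ = np.linalg.lstsq(X, true_age, rcond=None)
lin = X @ w  # linear-regression ensemble prediction
```

The fitted weights let more informative patches (e.g. around the ventricles and hippocampus) contribute more than a plain average would.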

Large-scale Unbiased Neuroimage Indexing via 3D GPU-SIFT Filtering and Keypoint Masking

Etienne Pepin (École de Technologie Supérieure); Jean-baptiste Carluer (Université de Nantes); Laurent Chauvin (École de Technologie Supérieure); Matthew Toews (Canada)*; Rola Harmouche (CNRC)

Abstract: We propose a feature extraction method via a novel description and a scalable GPU implementation (the first to our knowledge) of the 3D scale-invariant feature transform (SIFT). The feature extraction is first represented as a shallow convolutional neural network with pre-computed filters, followed by a masked keypoint analysis. We use the implementation in order to investigate feature extraction for specific instance identification on natural non-skull-stripped magnetic resonance image (MRI) neuroimaging data. The proposed implementation is invariant to 3D similarity transforms and aims to improve robustness by reducing noise and bias for image processing convolution operations. We show interpretable feature visualizations, which help explain the obtained results. We demonstrate state-of-the-art results in large-scale neuroimage family indexing experiments on 3D data from the Human Connectome Project repository, and show significant speed gains compared to a CPU implementation. The results imply that using SIFT feature extraction for neuroimaging analysis can lead to less noisy results without the need for hard masking during preprocessing. The resulting interpretable features can help understand brain similarities between family members, and can also be used on arbitrary image modalities and anatomical structures.

A Longitudinal Method for Simultaneous Whole-Brain and Lesion Segmentation in Multiple Sclerosis

Stefano Cerri (Technical University of Denmark)*; Andrew Hoopes (MGH); Douglas Greve (Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital); Mark Muehlau (Abteilung für diagnostische und interventionelle Neuroradiologie, Klinikum rechts der Isar); Koen Van Leemput (Technical University of Denmark)

Abstract: In this paper we propose a novel method for the segmentation of longitudinal brain MRI scans of patients suffering from Multiple Sclerosis. The method builds upon an existing cross-sectional method for simultaneous whole-brain and lesion segmentation, introducing subject-specific latent variables to encourage temporal consistency between longitudinal scans. It is very generally applicable, as it does not make any prior assumptions on the scanner, the MRI protocol, or the number and timing of longitudinal follow-up scans. Preliminary experiments on three longitudinal datasets indicate that the proposed method produces more reliable segmentations and detects disease effects better than the cross-sectional method it is based upon.

Towards MRI Progression Features for Glioblastoma Patients: From Automated Volumetry and Classical Radiomics to Deep Feature Learning

Yannick R Suter (Insel Data Science Center, Bern University Hospital, Bern, Switzerland; ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland)*; Urspeter Knecht (ARTORG Center for Biomedical Engineering Research, University of Bern, and Spital Emmental); Roland Wiest (UniBern); Ekkehard Hewer (Institute of Pathology, University of Bern); Philippe Schucht (Inselspital, University Hospital Bern); Mauricio Reyes (University of Bern, Healthcare Imaging A.I.)

Abstract: Disease progression for Glioblastoma multiforme patients is currently assessed with manual bi-dimensional measurements of the active contrast-enhancing tumor on Magnetic Resonance Images (MRI). This method is known to be susceptible to error; in the absence of a data-driven approach, progression thresholds had been set rather arbitrarily considering measurement inaccuracies. We propose a data-driven methodology for disease progression assessment, building on tumor volumetry, classical radiomics, and deep learning based features. For each feature type, we infer progression thresholds by maximizing the correlation of the time-to-progression (TTP) and overall survival (OS). On a longitudinal study comprising over 500 data points, we observed considerable underestimation of the current volumetric disease progression threshold. We evaluate the data-driven disease progression thresholds based on expert ratings using the current clinical practice.

Generalizing MRI subcortical segmentation to neurodegeneration

Hao Li (Vanderbilt University); Huahong Zhang (Vanderbilt University); Dewei Hu (Vanderbilt University); Hans Johnson (The University of Iowa); Jeff Long (University of Iowa); Jane Paulsen (University of Iowa); Ipek Oguz (Vanderbilt University)*

Abstract: Many neurodegenerative diseases like Huntington’s disease (HD) affect the subcortical structures of the brain, especially the caudate and the putamen. Automated segmentation of subcortical structures from MRI scans is thus important in HD studies. LiviaNET is the state-of-the-art deep learning approach for subcortical segmentation. Like all learning-based models, this approach requires appropriate training data. While annotated healthy control images are relatively easy to obtain, generating such annotations for each new disease population can be prohibitively expensive. In this work, we explore LiviaNET variants using well-known strategies for improving performance, to make it more generalizable to patients with substantial neurodegeneration. Specifically, we explored Res-blocks in our convolutional neural network, and we also explored manipulating the input to the network as well as random elastic deformations for data augmentation. We tested our method on images from the PREDICT-HD dataset, which includes control and HD subjects. We trained on control subjects and tested on both controls and HD patients. Compared to the original LiviaNET, we improved the accuracy of most structures, both for controls and for HD patients. The caudate has the most pronounced improvement in HD subjects with the proposed modifications to LiviaNET, which is noteworthy since the caudate is known to be severely atrophied in HD. This suggests our extensions may improve the generalization ability of LiviaNET to cohorts where significant neurodegeneration is present, without needing to be retrained.

Multiple Sclerosis Lesion Segmentation Using Longitudinal Normalization and Convolutional Recurrent Neural Networks

Sergio Tascon-Morales (mediri GmbH, University of Girona, University of Cassino and Southern Lazio, University of Burgundy)*; Stefan Hoffmann (mediri GmbH); Martin Treiber (mediri GmbH); Daniel Mensing (mediri GmbH); Arnau Oliver (University of Girona); Matthias Guenther (“Fraunhofer MEVIS, Germany”); Johannes Gregori (mediri GmbH)

Abstract: Magnetic resonance imaging (MRI) is the primary clinical tool to examine inflammatory brain lesions in Multiple Sclerosis (MS). Disease progression and inflammatory activity are examined by longitudinal image analysis to support diagnosis and treatment decisions. Automated lesion segmentation methods based on deep convolutional neural networks (CNN) have been proposed, but are not yet applied in the clinical setting. Typical CNNs working on cross-sectional single-time-point data have several limitations: changes in image characteristics between examinations due to scanner and protocol variations have an impact on the segmentation output, while at the same time the additional temporal correlation with pre-examinations is disregarded. In this work, we investigate approaches to overcome these limitations. Within a CNN architectural design, we propose convolutional Long Short-Term Memory (C-LSTM) networks to incorporate the temporal dimension. To reduce scanner- and protocol-dependent variations between single MRI exams, we propose a histogram normalization technique as a pre-processing step. The ISBI 2015 challenge data was used for network training and cross-validation. We demonstrate that the combination of the longitudinal normalization and the CNN architecture increases the performance and the inter-time-point stability of the lesion segmentation. In the combined solution, the Dice coefficient was increased and made more consistent for each subject. The proposed methods can therefore be used to increase the performance and stability of fully automated lesion segmentation applications in the clinical routine or in clinical trials.
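
Histogram normalization between time points, as proposed here, can be illustrated with generic histogram matching: map one scan's intensities so that their empirical CDF matches a reference scan's. This is a sketch of the general technique, not necessarily the paper's exact normalization.

```python
import numpy as np

def match_histogram(source, template):
    """Remap source intensities so their empirical CDF matches the template's."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    t_vals, t_counts = np.unique(template.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / template.size
    mapped = np.interp(s_cdf, t_cdf, t_vals)  # invert the template CDF
    return mapped[s_idx].reshape(source.shape)

rng = np.random.default_rng(1)
src = rng.normal(100, 20, (32, 32))  # toy "follow-up" scan intensities
tpl = rng.normal(60, 10, (32, 32))   # toy "baseline" scan intensities
out = match_histogram(src, tpl)
```

After matching, both time points share an intensity distribution, so a longitudinal CNN sees scanner- or protocol-driven intensity shifts much less.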

Deep Voxel-Guided Morphometry (VGM): Learning regional brain changes in serial MRI

Alena-Kathrin Schnurr (Computer Assisted Clinical Medicine, Mannheim Institute for Intelligent Systems in Medicine, Medical Faculty Mannheim, Heidelberg University, Mannheim)*; Philipp Eisele (Department of Neurology, University Medical Center Mannheim and Mannheim Center for Translational Neurosciences, Heidelberg University, Mannheim); Christina Rossmanith (Department of Neurology, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim); Stefan Hoffmann (mediri GmbH); Johannes Gregori (mediri GmbH); Andreas Dabringhaus (Brainalyze GbR, Cologne); Matthias Kraemer (Brainalyze GbR, Cologne); Raimar Kern (MedicalSyn GmbH, Stuttgart); Achim Gass (Department of Neurology, University Medical Center Mannheim, Medical Faculty Mannheim, Heidelberg University, Mannheim); Frank G Zoellner (Heidelberg University)

Abstract: Change detection and progression assessment in multiple sclerosis (MS) by serial magnetic resonance imaging (MRI) are important, yet challenging tasks. Analysis algorithms such as Voxel-Guided Morphometry (VGM) enable detection and quantification of even minor changes of the brain between time points. To shorten computation times and improve clinical applicability, we developed a convolutional neural network based VGM (Deep VGM) providing a fast solution for intra-individual serial volume change analysis in MS. We developed a residual architecture based on the 3D U-Net and investigated several loss functions to predict VGM maps from a baseline and a follow-up brain MRI. We trained and tested our approach on 71 MS patients. The Deep VGM maps were compared to the respective VGM maps via several image metrics and rated by an experienced neurologist. Deep VGM configured with the Mean Absolute Error and gradient loss outperformed all other tested loss functions. Deep VGM maps showed high similarity to the original VGM maps (SSIM = 0.9521 ± 0.0236). This was additionally confirmed by a neurologist analysing the MS lesions: Deep VGM resulted in a 3% lesion error rate compared to the original VGM approach. The computation time of Deep VGM was 99.62% shorter than that of VGM. Our experiments demonstrate that Deep VGM can approximate the complex VGM mapping at high quality while saving computation time.
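The best-performing loss above combines a Mean Absolute Error term with an image-gradient term. A minimal NumPy sketch of that combination is shown below; the relative weighting `lam` is an assumed hyperparameter, and the paper's exact gradient-loss formulation may differ.

```python
import numpy as np

def mae_gradient_loss(pred, target, lam=1.0):
    """Mean Absolute Error plus an L1 penalty on finite-difference image
    gradients -- a sketch of an MAE + gradient loss for volumetric map
    prediction (lam is an assumed weighting hyperparameter)."""
    mae = np.abs(pred - target).mean()
    # np.gradient returns one finite-difference array per spatial axis.
    grad_term = sum(np.abs(gp - gt).mean()
                    for gp, gt in zip(np.gradient(pred), np.gradient(target)))
    return mae + lam * grad_term
```

The gradient term penalizes mismatched edges, which encourages the predicted maps to reproduce the sharp local change patterns that VGM highlights, rather than just matching intensities on average.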

A deep transfer learning framework for 3D brain imaging based on optimal mass transport

Ling-Li Zeng (National University of Defense Technology)*; Chris Ching (Imaging Genetics Center, USC); Zvart Abaryan (Imaging Genetics Center, USC); Sophia Thomopoulos (Imaging Genetics Center, University of Southern California); Kai Gao (National University of Defense Technology); Alyssa Zhu (Imaging Genetics Center, USC); Anjanibhargavi Ragothaman (Imaging Genetics Center, USC); Faisal Rashid (Imaging Genetics Center, USC); Marc Harrison (Imaging Genetics Center, USC); Lauren Salminen (Imaging Genetics Center, USC); Brandalyn Riedel (Indiana University School of Medicine); Neda Jahanshad (Imaging Genetics Center, USC); Dewen Hu (National University of Defense Technology); Paul Thompson (Imaging Genetics Center)

Abstract: Deep learning has attracted increasing attention in brain imaging, but many neuroimaging data samples are small and fail to meet the training data requirements needed to optimize performance. In this study, we propose a deep transfer learning network based on Optimal Mass Transport (OMTNet) for 3D brain image classification using MRI scans from the UK Biobank. The major contribution of the OMTNet method is a way to map 3D surface-based vertex-wise brain shape metrics, including cortical thickness, surface area, curvature, sulcal depth, and subcortical radial distance and surface Jacobian determinant metrics, onto 2D planar images for each MRI scan based on area-preserving mapping, so that popular 2D convolutional neural networks pretrained on the ImageNet database, such as ResNet152 and DenseNet201, can be used for transfer learning of brain shape metrics. We used a score-fusion strategy to fuse all shape metrics and generate an ensemble classification. We tested the approach in a classification task conducted on 26k participants from the UK Biobank, using body mass index (BMI) thresholds as classification labels (normal vs. obese BMI). Ensemble classification accuracies of 72.8±1.2% and 73.9±2.3% were obtained for ResNet152 and DenseNet201 networks that used transfer learning, with 5.4-12.3% and 6.1-13.0% improvements relative to classifications based on single shape metrics, respectively. Transfer learning always outperformed direct learning and conventional linear support vector machines, with 3.4-8.7% and 4.9-6.0% improvements in ensemble classification accuracies, respectively. Our proposed OMTNet method may offer a powerful transfer learning framework that can be extended to other vertex-wise brain structural/functional imaging measures.
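The score-fusion step above can be sketched in a few lines: each per-metric classifier produces class-probability scores, and the ensemble prediction comes from combining them. Averaging is assumed here for illustration; the paper's exact fusion rule may differ.

```python
import numpy as np

def fuse_scores(score_list):
    """Average per-metric class-probability scores and take the argmax --
    a simple score-fusion sketch for ensembling classifiers trained on
    different shape metrics (averaging is an assumed fusion rule)."""
    stacked = np.stack(score_list)   # (n_metrics, n_samples, n_classes)
    fused = stacked.mean(axis=0)     # (n_samples, n_classes)
    return fused.argmax(axis=1)      # predicted class label per sample

# Hypothetical scores from two metric-specific classifiers, two samples.
thickness_scores = np.array([[0.9, 0.1], [0.2, 0.8]])
curvature_scores = np.array([[0.8, 0.2], [0.4, 0.6]])
labels = fuse_scores([thickness_scores, curvature_scores])
```

Fusing at the score level lets each metric-specific network stay independent while the ensemble benefits from their complementary errors.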

Communicative Reinforcement Learning Agents for Landmark Detection in Brain Images

Guy Leroy (Imperial College London); Daniel Rueckert (Imperial College London); Amir Alansary (Imperial College London)*

Abstract: Accurate detection of anatomical landmarks is an essential step in several medical imaging tasks. We propose a novel communicative multi-agent reinforcement learning (C-MARL) system to automatically detect landmarks in 3D brain images. C-MARL enables the agents to learn explicit communication channels, as well as implicit communication signals by sharing certain weights of the architecture among all the agents. The proposed approach is evaluated on two brain imaging datasets from adult magnetic resonance imaging (MRI) and fetal ultrasound scans. Our experiments show that involving multiple cooperating agents by learning their communication with each other outperforms previous approaches using single agents.
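The "implicit communication" described above comes from agents sharing certain network weights. A minimal NumPy sketch of that weight-sharing idea is shown below: all agents use one shared encoder but keep separate Q-value heads. This is an illustrative toy, not the paper's full C-MARL architecture.

```python
import numpy as np

class SharedEncoderAgents:
    """Multiple agents with one shared encoder (implicit communication via
    shared weights) and per-agent Q heads -- a minimal sketch of the
    weight-sharing idea, with sizes and init chosen for illustration."""
    def __init__(self, n_agents, obs_dim, hidden, n_actions, seed=0):
        rng = np.random.default_rng(seed)
        # A single encoder weight matrix: updated by all agents' gradients.
        self.W_shared = rng.normal(scale=0.1, size=(obs_dim, hidden))
        # One independent head per agent maps features to action values.
        self.heads = [rng.normal(scale=0.1, size=(hidden, n_actions))
                      for _ in range(n_agents)]

    def q_values(self, observations):
        """observations: (n_agents, obs_dim); each agent's observation is
        encoded by the same shared weights, then its own head."""
        feats = np.tanh(observations @ self.W_shared)
        return [f @ head for f, head in zip(feats, self.heads)]
```

Because every agent's experience updates the same encoder, information learned by one agent about local image appearance is immediately available to the others, complementing any explicit communication channel.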