MLCN 2021 Keynote 1: Dr. Adrian Dalca

Title: Unsupervised Learning of Image Correspondences in Neuroimaging

Abstract: Image alignment, or registration, is fundamental to many neuroimaging tasks. Classical neuroimaging registration methods have undergone decades of technical development, but are often prohibitively slow since they solve an optimization problem for each 3D image pair. In this talk, I will first introduce the modern deep learning paradigm that enables deformable medical image registration that is more accurate and substantially faster than traditional methods. Building on these models, I will discuss the new learning frameworks they make possible for a variety of tasks, such as a new class of on-demand conditional templates that enables new neuroimaging applications. I will also discuss other exciting recent directions, such as modality-invariant learning-based registration methods that work on unseen test-time contrasts, and hyperparameter-agnostic learning for neuroimage registration.
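
As a schematic illustration of why classical registration is slow, the toy sketch below (not the speaker's method; all names are invented for illustration) performs per-pair "registration" of two 1D signals by brute-force search over integer shifts, minimizing a sum-of-squared-differences cost. Learning-based approaches amortize exactly this inner optimization: a network trained on many pairs predicts the deformation in a single forward pass.

```python
def ssd(a, b):
    """Sum-of-squared-differences dissimilarity between two signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def shift(signal, k):
    """Shift a signal right by k samples, padding with zeros."""
    n = len(signal)
    return [signal[i - k] if 0 <= i - k < n else 0.0 for i in range(n)]

def register_by_optimization(moving, fixed, max_shift=5):
    """Per-pair 'registration': search for the shift that best aligns moving to fixed."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda k: ssd(shift(moving, k), fixed))

fixed  = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0]
moving = [0.0, 0.0, 0.0, 0.0, 1.0, 2.0, 1.0]  # same bump, shifted right by 2

print(register_by_optimization(moving, fixed))  # best shift found: -2
```

For a 3D image pair the search space is a dense deformation field rather than a single shift, which is why the per-pair optimization becomes prohibitively expensive.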

MLCN 2021 Keynote 2: Prof. Paul Thompson

Title: AI and Deep Learning in Medical Imaging and Genomics: Lessons from ENIGMA’s Global Studies of Brain Diseases

Abstract: AI and deep learning are rapidly advancing, with new mathematics and algorithms being developed daily to discover patterns in medical imaging data. AI can aid diagnosis, prognosis, and disease subtyping, and can show us how to tackle previously unimaginable problems, such as ‘cracking the brain’s genetic code’. With examples from two large-scale initiatives – ENIGMA and AI4AD – we describe some of the triumphs and challenges in learning from medical imaging data collected across the world. Since 2009, ENIGMA has (1) performed the largest neuroimaging studies of 13 major brain diseases, from Parkinson’s, epilepsy and ataxia to schizophrenia, depression, PTSD, autism, ADHD and OCD, and (2) led the largest genetic studies of the human brain. Over 2000 scientists take part in ENIGMA’s 51 working groups, pooling data from over 45 countries, and suggesting and tackling new problems. Deep learning methods including CNNs and RNN variants, variational autoencoders, and generative adversarial networks (GANs) are being applied to these data to answer key questions about the brain. We cover the basic ideas, pitfalls, and challenges in applying these machine learning and deep learning methods to diverse biomedical data worldwide. We illustrate some lessons learned from multisite machine learning projects that deal with data imbalance, domain shift and adaptation, and federated learning, as well as some unsolved problems and opportunities to take part in international machine learning challenges.

A GAN architecture applied to MRIs.

MLCN 2021 Accepted Papers

Towards Self-Explainable Classifiers and Regressors in Neuroimaging with Normalizing Flows

Matthias Wilms (University of Calgary)*; Pauline Mouches (University of Calgary); Jordan J. Bannister (University of Calgary); Deepthi Rajashekar (University of Calgary); Sönke Langner (Rostock University Medical Center); Nils Daniel Forkert (Department of Radiology & Hotchkiss Brain Institute, University of Calgary)

Abstract: Deep learning-based regression and classification models are used in most subareas of neuroimaging because of their accuracy and flexibility. While such models achieve state-of-the-art results in many different application scenarios, their decision-making process is usually difficult to explain. This black-box behaviour is problematic when non-technical users like clinicians and patients need to trust them and make decisions based on their results. In this work, we propose to build self-explainable generative classifiers and regressors using a flexible and efficient normalizing flow framework. We directly exploit the invertibility of these normalizing flows to explain the decision-making process in a highly accessible way via consistent and spatially smooth attribution maps and counterfactual images for alternate prediction results. The evaluation using more than 5000 3D MR images highlights the explainability capabilities of the proposed models and shows that they achieve a similar level of accuracy as standard convolutional neural networks for image-based brain age regression and brain sex classification tasks.
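
The counterfactual mechanism rests on invertibility. In the minimal sketch below, a toy 2D affine bijection stands in for the authors' normalizing flow (all names and numbers are invented for illustration): an input is mapped to a latent code, recovered exactly by the inverse, and then edited in latent space to produce a counterfactual.

```python
def forward(x):
    """Toy 'flow': an invertible element-wise affine transform x -> z."""
    return [2.0 * x[0] + 1.0, 0.5 * x[1] - 3.0]

def inverse(z):
    """Exact inverse z -> x, available by construction of the bijection."""
    return [(z[0] - 1.0) / 2.0, (z[1] + 3.0) * 2.0]

x = [4.0, 10.0]
z = forward(x)
assert inverse(z) == x   # invertibility: zero reconstruction error

# Counterfactual: nudge the latent code along one dimension, then invert.
z_cf = [z[0] + 1.0, z[1]]
x_cf = inverse(z_cf)
print(x_cf)  # -> [4.5, 10.0], an input differing only along the edited direction
```

Because a real normalizing flow is likewise bijective, any edited latent code decodes to an actual image, which is what makes the counterfactuals in the paper exact rather than approximate.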

Robust Hydrocephalus Brain Segmentation via Globally and Locally Spatial Guidance

Yuanfang Qiao (Shanghai Jiao Tong University)*; Haoyi Tao (Shanghai Jiao Tong University); Jiayu Huo (Shanghai Jiao Tong University); Wenjun Shen (Shanghai Jiao Tong University); Qian Wang (Shanghai Jiao Tong University); Lichi Zhang (Shanghai Jiao Tong University)

Abstract: Segmentation of brain regions in hydrocephalus MR images is critically important for quantitatively evaluating patients’ abnormalities. However, brain images obtained from hydrocephalus patients typically exhibit large deformations and lesion occupancies compared to those of normal subjects. This disrupts the brain’s anatomical structure and dramatically changes the shape and location of brain regions, which poses a significant challenge to the segmentation task. In this paper, we propose a novel segmentation framework with two modules to better locate and segment these highly distorted brain regions. First, to provide global anatomical structure information and the absolute position of target regions for segmentation, we use a dual-path registration network which is incorporated into the framework and trained jointly with it. Second, we develop a novel Positional Correlation Attention Block (PCAB) to introduce local prior information about the relative positional correlations between different regions, so that the segmentation network can be guided in locating the target regions. In this way, the segmentation framework can be trained with spatial guidance from both global and local positional priors to ensure the robustness of the segmentation. We evaluated our method on brain MR data of hydrocephalus patients by segmenting 17 consciousness-related ROIs and demonstrated that the proposed method achieves high performance on image data with highly variable deformations.

Dynamic Adaptive Spatio-temporal Graph Convolution for fMRI Modelling

Ahmed ElGazzar (AMC)*; Rajat Thomas (Amsterdam University Medical Center); Guido Van Wingen (AMC)

Abstract: The characterisation of the brain as a functional network, in which the connections between brain regions are represented by correlation values across time series, has been very popular in recent years. Although this representation has advanced our understanding of brain function, it is a simplified model of brain connectivity, which has a complex dynamic spatio-temporal nature. Oversimplification of the data may hinder the merits of applying advanced non-linear feature extraction algorithms. To this end, we propose a dynamic adaptive spatio-temporal graph convolution (DAST-GCN) model to overcome the shortcomings of pre-defined static correlation-based graph structures. The proposed approach allows end-to-end inference of dynamic connections between brain regions via a layer-wise graph structure learning module, while mapping brain connectivity to a phenotype in a supervised learning framework. This leverages the computational power of the model, data and targets to represent brain connectivity, and could enable the identification of potential biomarkers for the supervised target in question. We evaluate our pipeline on the UK Biobank dataset for age and gender classification tasks from resting-state functional scans and show that it outperforms currently adopted linear and non-linear methods in neuroimaging. Further, we assess the generalizability of the inferred graph structure by transferring the pre-trained graph to an independent dataset for the same task. Our results demonstrate the task-robustness of the graph against different scanning parameters and demographics.
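
For reference, the static correlation-based graph structure that DAST-GCN is designed to move beyond can be built in a few lines. This illustrative pure-Python sketch (toy data, not from the paper) computes a Pearson-correlation functional connectivity matrix for three regional time series.

```python
def pearson(a, b):
    """Pearson correlation coefficient between two equal-length time series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

series = [
    [1.0, 2.0, 3.0, 4.0],   # region A
    [2.0, 4.0, 6.0, 8.0],   # region B: perfectly correlated with A
    [4.0, 3.0, 2.0, 1.0],   # region C: perfectly anti-correlated with A
]
fc = [[pearson(a, b) for b in series] for a in series]
print(round(fc[0][1], 6), round(fc[0][2], 6))  # 1.0 -1.0
```

The paper's point is that fixing such a matrix before training discards dynamics; DAST-GCN instead learns the graph structure jointly with the supervised objective.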

MRI image registration considerably improves CNN-based disease classification

Malte Klingenberg (Bernstein Center for Computational Neuroscience); Didem Stark (Charite – Universitätsmedizin Berlin)*; Fabian Eitel (Bernstein Center for Computational Neuroscience); Kerstin Ritter (Charité – Universitätsmedizin Berlin)

Abstract: Machine learning methods have many promising applications in medical imaging, including the diagnosis of Alzheimer’s Disease (AD) based on magnetic resonance imaging (MRI) brain scans. These scans usually undergo several preprocessing steps, including image registration. However, the effect of image registration methods on the performance of the machine learning classifier is poorly understood. In this study, we train a convolutional neural network (CNN) to detect AD on a dataset preprocessed in three different ways. The scans were registered to a template either linearly or nonlinearly, or were only padded and cropped to the needed size without performing image registration. We show that both linear and nonlinear registration significantly increase the balanced accuracy of the classifier, by around 6-7%, in comparison to no registration. No significant difference between linear and nonlinear registration was found. The dataset split, although carefully matched for age and sex, strongly affects classifier performance, suggesting that some subjects are easier to classify than others, possibly due to different clinical manifestations of AD and varying rates of disease progression. In conclusion, we show that for a CNN detecting AD, prior image registration improves classifier performance, but the choice of a linear or nonlinear registration method has little impact on the classification accuracy and can be made based on other constraints such as computational resources or planned further analyses like the use of brain atlases.

Detection of abnormal folding patterns with unsupervised deep generative models

Louise Guillon (NeuroSpin, CEA Saclay)*; Bastien Cagna (NeuroSpin, CEA Saclay); Benoit Dufumier (NeuroSpin, CEA Saclay); Joël Chavas (NeuroSpin, CEA Saclay); Denis Rivière (Neurospin, CEA Saclay); Jean-François Mangin (Neurospin, CEA Saclay)

Abstract: Although the main structures of cortical folding are present in each human brain, the folding pattern is unique to each individual. Because of this large normal variability, the identification of abnormal patterns associated with developmental disorders is a complex open challenge. In this paper, we tackle this problem as an anomaly detection task and explore the potential of deep generative models using benchmarks made up of synthetic anomalies. To focus learning on the folding geometry, brain MRIs are first preprocessed to retain only a skeleton-based negative cast of the cortex. A variational auto-encoder is trained to obtain a representation of the regional variability of the folding pattern in the general population. Then several synthetic benchmark datasets of abnormalities are designed. The latent space expressivity is assessed through classification experiments between the latent codes of controls and of abnormal subjects. Finally, the properties encoded in the latent space are analyzed through perturbation of specific latent dimensions and observation of the resulting modifications of the reconstructed images. The results show that the latent representation is rich enough to distinguish subtle differences such as asymmetries between the right and left hemispheres.

Deep Stacking Networks for Conditional Nonlinear Granger Causal Modeling of fMRI Data

Kai-Cheng Chuang (Louisiana State University)*; Sreekrishna Ramakrishnapillai (Pennington Biomedical Research Center); Lydia Bazzano (Tulane University); Owen Carmichael (Pennington Biomedical Research Center)

Abstract: Conditional Granger causality, based on functional magnetic resonance imaging (fMRI) time series signals, is the quantification of how strongly brain activity in a certain source brain region contributes to brain activity in a target brain region, independent of the contributions of other source regions. Current methods to solve this problem are either unable to model nonlinear relationships between source and target signals, unable to efficiently quantify time lags in source-target relationships, or require ad hoc parameter settings and post hoc calculations to assess conditional Granger causality. This paper proposes the use of deep stacking networks, with dilated convolutional neural networks (CNNs) as component parts, to address these challenges. The dilated CNNs nonlinearly model the target signal as a function of source signals. Conditional Granger causality is assessed in terms of how much modeling fidelity increases when additional dilated CNNs are added to the model. Time lags between source and target signals are estimated by analyzing estimated dilated CNN parameters. Our technique successfully estimated conditional Granger causality, did not spuriously identify false causal relationships, and correctly estimated time lags when applied to synthetic datasets and data generated by the STANCE fMRI simulator. When applied to real-world task fMRI data from an epidemiological cohort, the method identified biologically plausible causal relationships among regions known to be task-engaged and provided new information about causal structure among sources and targets that traditional single-source causal modeling could not provide. The proposed method is promising for modeling complex Granger causal relationships within brain networks.
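
A toy version of the Granger idea the paper builds on (the paper uses dilated CNNs; this sketch uses single-lag linear models and invented data purely for illustration): a source x "Granger-causes" a target y if adding x's past improves the prediction of y beyond what y's own past achieves. The fidelity gain is the causality signal.

```python
def fit_predict_error(inputs, targets):
    """Fit the best scalar coefficient by 1D least squares; return the MSE residual."""
    coef = sum(u * v for u, v in zip(inputs, targets)) / sum(u * u for u in inputs)
    return sum((v - coef * u) ** 2 for u, v in zip(inputs, targets)) / len(targets)

x = [1.0, -2.0, 3.0, -1.0, 2.0, -3.0, 1.0, 2.0]
y = [0.0] + [0.9 * v for v in x[:-1]]          # y is driven by x with lag 1

err_self   = fit_predict_error(y[:-1], y[1:])  # predict y from its own past
err_source = fit_predict_error(x[:-1], y[1:])  # predict y from the source's past

print(err_source < err_self)  # True: x's past carries causal information about y
```

In the paper the same comparison is made between stacked dilated-CNN models with and without a given source, which handles nonlinear relationships and longer lags.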

Constrained Learning of Task-related and Spatially-Coherent Dictionaries from Task fMRI Data

Sreekrishna Ramakrishnapillai (Louisiana State University)*; Harris R. Lieberman (US Army Research Institute of Environmental Medicine); Jennifer C. Rood (Pennington Biomedical Research Center); Stefan M. Pasiakos (US Army Research Institute of Environmental Medicine); Kori Murray (Pennington Biomedical Research Center); Preetham Shankpal (GE Healthcare); Owen Carmichael (Pennington Biomedical Research Center)

Abstract: Dictionary learning and sparse coding techniques overcome limitations of traditional voxel-level analyses of task-based functional magnetic resonance imaging (fMRI) data by identifying broader temporal and spatial patterns of brain activity. However, prior applications of these methods to task-related fMRI data are not simultaneously optimized to find temporal patterns of activity that change in concert with changes in task conditions and spatial patterns that leverage existing neuroscience knowledge. In this study we present a new sparse dictionary learning method that uses prior knowledge of the temporal pattern of task conditions and the locations of brain regions hypothesized to be involved in the task to decompose fMRI data into temporal patterns of signals that loosely differ between task conditions and sparse spatial patterns that are at least partially similar to known functional network hubs. An efficient on-line optimization framework identifies the temporal and spatial patterns. The method identifies spatial and temporal patterns programmed into synthetic task fMRI data. The proposed method also identifies spatial locations known a priori to be activated by the Attention Network Task (ANT) more completely than competing methods when applied to real fMRI data from 20 healthy young individuals aged 18 to 39 years. Simultaneously leveraging the known temporal structure of the task and biasing solutions towards hypothesized network hubs increases the usefulness of sparse dictionary learning methods applied to task fMRI data.

Structure-Function Mapping via Graph Neural Networks

Yang Ji (Université Côte d’Azur, Inria)*; Samuel Deslauriers-Gauthier (Université Côte d’Azur, Inria); Rachid Deriche (Inria)

Abstract: Understanding the mapping between structural and functional brain connectivity is essential for understanding how cognitive processes emerge from their morphological substrates. Many studies have investigated the problem from an eigendecomposition viewpoint; however, few have taken a deep learning viewpoint, and even fewer have worked within the framework of graph neural networks (GNNs). As deep learning has produced significant results in several fields, there has been an increasing interest in applying neural networks to graph problems. In this paper, we investigate structural-to-functional connectivity mapping within a deep learning framework based on GNNs, including graph convolutional networks (GCN) and graph transformer networks (GTN). To our knowledge, this original GTN-based framework has never been studied in the context of structure-function and brain connectivity mapping. To achieve this goal, we use a GNN-based encoder-decoder system, where the encoder takes a structural connectivity (SC) matrix as input and generates a lower-dimensional latent representation of each node, and the decoder uses the latent representations to reconstruct or predict the associated functional connectivity (FC) matrix. Besides comparing different encoders for node embedding, we also demonstrate that a decoder, which projects lower-dimensional vectors onto a higher-dimensional space, can improve model performance. Our experiments demonstrate that both the GCN encoder and the GTN encoder combined with the proposed decoder provide better results on our data than the previously proposed GCN autoencoder model. The GTN encoder is also shown to be much more effective on noisy data and outliers.
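
A minimal sketch of the graph-convolution building block used in such encoders (illustrative only, not the authors' exact model): each node's feature vector is replaced by a degree-normalized average over itself and its neighbours, i.e. H' = D^-1 (A + I) X, here without a learned weight matrix for brevity.

```python
def gcn_step(adj, feats):
    """One graph-convolution step: average each node's features with its neighbours'."""
    n = len(adj)
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j] or j == i]  # self-loop included
        out.append([sum(feats[j][k] for j in neigh) / len(neigh)
                    for k in range(len(feats[0]))])
    return out

# 3-node path graph 0 -- 1 -- 2, with one scalar feature per node.
adj   = [[0, 1, 0],
         [1, 0, 1],
         [0, 1, 0]]
feats = [[3.0], [0.0], [6.0]]

print(gcn_step(adj, feats))  # [[1.5], [3.0], [3.0]]
```

In the paper's setting, the adjacency would come from the SC matrix and the step would be followed by a learned linear transform and nonlinearity; stacking such layers yields the node embeddings the decoder consumes.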

Multi-Modal Brain Segmentation Using Hyper-Fused Convolutional Neural Network

Wenting Duan (University of Lincoln)*; Lei Zhang (University of Lincoln); Jordan Colman (Ashford and St Peter’s Hospitals NHS Foundation Trust); Giosue Gulli (NHS); Xujiong Ye (University of Lincoln)

Abstract: Algorithms that fuse information acquired from different imaging modalities have been shown to improve the segmentation results of various applications in the medical field. Motivated by recent successes achieved using densely connected fusion networks, we propose a new fusion architecture for the purpose of 3D segmentation in multi-modal brain MRI volumes. Based on a hyper-densely connected convolutional neural network, our network promotes a progressive information abstraction process, introduces a new module, ResFuse, to merge and normalize features from different modalities, and adopts combo loss for handling data imbalance. The proposed approach is evaluated on both an outsourced dataset for acute ischemic stroke lesion segmentation and a public dataset for infant brain segmentation (iSeg-17). The experimental results show our approach achieves superior performance on both datasets compared to the state-of-the-art fusion network.

Patch vs. global image-based unsupervised anomaly detection in MR brain scans of early Parkinsonian patients

Veronica Munoz-Ramirez (Université Grenoble Alpes); Nicolas E Pinon (CREATIS); Florence Forbes (Inria); Carole Lartizien (CREATIS); Michel Dojat (Inserm)*

Abstract: Although neural networks have proven very successful in a number of medical image analysis applications, their use remains difficult when targeting subtle tasks such as the identification of barely visible brain lesions, especially given the lack of annotated datasets. Good candidate approaches are patch-based unsupervised pipelines, which have the advantage of both increasing the number of input samples and capturing local, fine anomaly patterns distributed in the image, at the potential cost of losing global structural information. We illustrate this trade-off on Parkinson’s disease (PD) anomaly detection by comparing the performance of two anomaly detection models based on a spatial auto-encoder (AE) and an adaptation of a patch-fed siamese auto-encoder (SAE). On average, the SAE model performs better, showing that patches may indeed be advantageous.
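
The core unsupervised-anomaly-detection recipe behind both models, in heavily simplified form: a model trained only on normal data reconstructs normal inputs well, so a large reconstruction error flags an anomaly. In this toy sketch (invented data; the "auto-encoder" is just the mean of the training patches) the threshold is fit on the normal patches alone.

```python
def mse(a, b):
    """Mean squared reconstruction error between two patches."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

normal_patches = [[1.0, 2.0, 1.0],
                  [1.2, 2.1, 0.9],
                  [0.8, 1.9, 1.1]]

# "Training": reconstruct any input as the mean normal patch.
mean_patch = [sum(p[k] for p in normal_patches) / len(normal_patches)
              for k in range(3)]

# Threshold fit on normal data only (with some slack).
threshold = 2 * max(mse(p, mean_patch) for p in normal_patches)

test_normal  = [1.1, 2.0, 1.0]
test_anomaly = [5.0, 0.0, 4.0]
print(mse(test_normal, mean_patch) <= threshold,
      mse(test_anomaly, mean_patch) > threshold)  # True True
```

The AE and SAE in the paper play the role of `mean_patch` here, with the patch-based variant scoring many local patches per image instead of one global reconstruction.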

PialNN: A Fast Deep Learning Framework for Cortical Pial Surface Reconstruction

Qiang Ma (Imperial College London)*; Emma C Robinson (King’s College); Bernhard Kainz (Imperial College London, FAU Erlangen-Nürnberg); Daniel Rueckert (Imperial College London); Amir Alansary (Imperial College London)

Abstract: Traditional cortical surface reconstruction is time consuming and limited by the resolution of brain Magnetic Resonance Imaging (MRI). In this work, we introduce Pial Neural Network (PialNN), a 3D deep learning framework for pial surface reconstruction. PialNN is trained end-to-end to deform an initial white matter surface to a target pial surface by a sequence of learned deformation blocks. A local convolutional operation is incorporated in each block to capture the multi-scale MRI information of each vertex and its neighborhood. This is fast and memory-efficient, which allows reconstructing a pial surface mesh with 150k vertices in one second. The performance is evaluated on the Human Connectome Project (HCP) dataset including T1-weighted MRI scans of 300 subjects. The experimental results demonstrate that PialNN reduces the geometric error of the predicted pial surface by 30% compared to state-of-the-art deep learning approaches.

Dynamic Sub-graph Learning for Patch-based Cortical Folding Classification

Zhiwei Deng (University of Southern California); Jiong Zhang (Laboratory of Neural Imaging, University of Southern California); Yonggang Shi (University of Southern California)*

Abstract: Surface mapping techniques have been commonly used for the alignment of cortical anatomy and the detection of gray matter thickness changes in Alzheimer’s disease (AD) imaging research. Two major hurdles exist in further advancing the accuracy of cortical analysis. First, high variability in the topological arrangement of gyral folding patterns makes it very likely that the sulcal area of one brain will be mapped to the gyral area of another brain. Second, the considerable differences in the thickness distributions of sulcal and gyral areas will greatly reduce the power of atrophy detection if they are misaligned. To overcome these challenges, it is desirable to identify brains with cortical regions sharing similar folding patterns and perform anatomically more meaningful atrophy detection. To this end, we propose a patch-based classification method for folding patterns by developing a novel graph convolutional neural network (GCN). We focus on the classification of the precuneus region in this work because it is one of the early cortical regions affected by AD and is considered to have three major folding patterns. Compared to previous GCN-based methods, the main novelty of our model is the dynamic learning of sub-graphs for each vertex of a surface patch based on distances in the feature space. Our proposed network dynamically updates the vertex feature representation without overly smoothing the local folding structures. In our experiments, we use a large-scale dataset with 980 precuneus patches and demonstrate that our method outperforms five other neural network models in classifying precuneus folding patterns.

H3K27M Mutations Prediction for Brainstem Gliomas Based on Diffusion Radiomics Learning

Ne Yang (Tsinghua University); Xiong Xiao (Beijing Tiantan Hospital); Xianyu Wang (Tsinghua University); Guocan Gu (Capital Medical University); Liwei Zhang (Beijing Tiantan Hospital); Hongen Liao (Tsinghua University)*

Abstract: The H3K27M mutation is the most common mutation in brainstem gliomas (BSGs) and is associated with highly invasive neoplasms and poor prognosis. Accurate presurgical and noninvasive prediction of H3K27M mutations based on preoperative multi-modal neuroimaging is of great clinical value in the diagnosis, prognosis and therapeutic selection of BSGs. Traditional BSG radiomics models usually focus only on local tumor morphometric characteristics. However, given that highly invasive BSGs may significantly affect large-scale brain network connectivity, we reasonably infer that local radiomics and global connectomics may provide different perspectives for H3K27M genotype prediction. Therefore, we define a graph-based diffusion radiomics learning model to integrate these two kinds of features seamlessly. Specifically, edges of the defined brain network are determined by neural fiber connections, while node features of the brainstem are governed by local tumor radiomics. Upon this model, we further propose a multi-mechanism diffusion convolutional network to couple multi-modal information and generate a joint representation for brain disease diagnosis. By graph diffusion convolution, the local radiomics information spreads along the brain network structure to enhance graph representation learning, and eventually the learned diffusion radiomics features contribute to disease prediction. Experiments on a real BSG dataset demonstrate the effectiveness and advantages of our proposed method for preoperative prediction of H3K27M status.

Distinguishing Healthy Ageing from Dementia: a Biomechanical Simulation of Brain Atrophy using Deep Networks

Mariana da Silva (King’s College London)*; Carole Sudre; Kara Garcia (Indiana University); Cher Bass (King’s College London); Jorge Cardoso (King’s College London); Emma C Robinson (King’s College)

Abstract: Biomechanical modeling of tissue deformation can be used to simulate different scenarios of longitudinal brain evolution. In this work, we present a deep learning framework for hyper-elastic strain modelling of brain atrophy, during healthy ageing and in Alzheimer’s Disease. The framework directly models the effects of age, disease status, and scan interval to regress regional patterns of atrophy, from which a strain-based model estimates deformations. This model is trained and validated using 3D structural magnetic resonance imaging data from the ADNI cohort. Results show that the framework can estimate realistic deformations, following the known course of Alzheimer’s disease, that clearly differentiate between healthy and demented patterns of ageing. This suggests the framework has potential to be incorporated into explainable models of disease, for the exploration of interventions and counterfactual examples.

Geometric Deep Learning of the Human Connectome Project Multimodal Cortical Parcellation

Logan ZJ Williams (King’s College London)*; Abdulah Fawaz (King’s College London); Matthew Glasser (Washington University, St. Louis); A. David Edwards (King’s College London); Emma C Robinson (King’s College)

Abstract: Understanding the topographic heterogeneity of cortical organisation is an essential step towards precision modelling of neuropsychiatric disorders. While many cortical parcellation schemes have been proposed, few attempt to model inter-subject variability. For those that do, most have been proposed for high-resolution research-quality data, without exploration of how well they generalise to clinical-quality scans. In this paper, we benchmark and ensemble four different geometric deep learning models on the task of learning the Human Connectome Project (HCP) multimodal cortical parcellation. We employ Monte Carlo dropout to investigate model uncertainty with a view to propagating these labels to new datasets. Models achieved an overall Dice overlap ratio of >0.85 +/- 0.02. Regions with the highest mean and lowest variance included V1 and areas within the parietal lobe, and regions with the lowest mean and high variance included areas within the medial frontal lobe, lateral occipital pole and insula. Qualitatively, our results suggest that more work is needed before geometric deep learning methods are capable of fully capturing atypical cortical topographies such as those seen in area 55b. However, information about topographic variability between participants was encoded in vertex-wise uncertainty maps, suggesting a potential avenue for projection of this multimodal parcellation to new datasets with limited functional MRI, such as the UK Biobank.
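
Monte Carlo dropout, as used here for the uncertainty maps, amounts to keeping dropout active at test time, passing the same input through the network many times, and reading the spread of the predictions as uncertainty. A toy sketch (the "network" is a single hypothetical linear layer with invented weights):

```python
import random

random.seed(0)
weights = [0.5, -1.0, 2.0]
x = [1.0, 2.0, 3.0]

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with each weight independently dropped with prob p_drop."""
    kept = [w if random.random() > p_drop else 0.0 for w in weights]
    return sum(w * v for w, v in zip(kept, x))

samples = [stochastic_forward(x) for _ in range(1000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(var > 0.0)  # True: the prediction spread is the per-input uncertainty
```

Applied vertex-wise on the cortical surface, the per-vertex variance of the predicted labels yields exactly the kind of uncertainty map described in the abstract.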

Unfolding the medial temporal lobe cortex to characterize neurodegeneration due to Alzheimer’s disease pathology using ex vivo imaging

Sadhana Ravikumar (University of Pennsylvania)*; Laura Wisse (Lund University); Sydney Lim (University of Pennsylvania); David Irwin (University of Pennsylvania); Ranjit Ittyerah (University of Pennsylvania); Long Xie (University of Pennsylvania); Sandhitsu R. Das (University of Pennsylvania); Edward Lee (University of Pennsylvania); M. Dylan Tisdall (University of Pennsylvania); Karthik Prabhakaran (University of Pennsylvania); John Detre (University of Pennsylvania); Gabor Mizsei (University of Pennsylvania); John Q. Trojanowski (University of Pennsylvania); John Robinson (University of Pennsylvania); Theresa Schuck (University of Pennsylvania); Murray Grossman (University of Pennsylvania); Emilio Artacho-Pérula (University of Castilla La Mancha); Maria Mercedes Iñiguez de Onzoño Martin (University of Castilla La Mancha); María del Mar Arroyo Jiménez (University of Castilla La Mancha); Monica Muñoz (University of Castilla La Mancha); Francisco Javier Molina Romero (University of Castilla La Mancha); Maria del Pilar Marcos Rabal (University of Castilla La Mancha); Sandra Cebada Sánchez (University of Castilla La Mancha); José Carlos Delgado González (University of Castilla La Mancha); Carlos de la Rosa Prieto (University of Castilla La Mancha); Marta Córcoles Parada (University of Castilla La Mancha); David Wolk (University of Pennsylvania); Ricardo Insausti (University of Castilla La Mancha); Paul Yushkevich (University of Pennsylvania)

Abstract: Neurofibrillary tangle (NFT) pathology in the medial temporal lobe (MTL) is closely linked to neurodegeneration, and is the early pathological change associated with Alzheimer’s Disease (AD). In this work, we investigate the relationship between MTL morphometry features derived from high-resolution ex vivo imaging and histology-based measures of NFT pathology using a topological unfolding framework applied to a dataset of 18 human postmortem MTL specimens. The MTL has a complex 3D topography and exhibits a high degree of inter-subject variability in cortical folding patterns, which poses a significant challenge for volumetric registration methods typically used during MRI template construction. By unfolding the MTL cortex, the proposed framework explicitly accounts for the sheet-like geometry of the MTL cortex and provides a two-dimensional reference coordinate space which can be used to implicitly register cortical folding patterns across specimens based on distance along the cortex, despite large anatomical variability. Leveraging this framework in a subset of 15 specimens, we characterize the associations between NFTs and morphological features such as cortical thickness and surface curvature and identify regions in the MTL where patterns of atrophy are strongly correlated with NFT pathology.

Improving Phenotype Prediction using Long-Range Spatio-Temporal Dynamics of Functional Connectivity

Simon Dahan (King’s College London)*; Logan ZJ Williams (King’s College London); Daniel Rueckert (Imperial College London); Emma C Robinson (King’s College)

Abstract: The study of functional brain connectivity (FC) is important for understanding the underlying mechanisms of many psychiatric disorders. Many recent analyses adopt graph convolutional networks to study non-linear interactions between functionally-correlated states. However, although patterns of brain activation are known to be hierarchically organised in both space and time, many methods have failed to extract powerful spatio-temporal features. To overcome these challenges, and to improve understanding of long-range functional dynamics, we translate an approach from the domain of skeleton-based action recognition that was designed to model interactions across space and time. We evaluate this approach using the Human Connectome Project (HCP) dataset on sex classification and fluid intelligence prediction. To account for subject topographic variability of functional organisation, we modelled functional connectomes using multi-resolution dual-regressed (subject-specific) ICA nodes. Results show a prediction accuracy of 94.4% for sex classification (an increase of 6.2% compared to other methods), and an improvement of correlation with fluid intelligence of 0.325 vs 0.144, relative to a baseline model that encodes space and time separately. Results suggest that explicit encoding of the spatio-temporal dynamics of brain functional activity may improve the precision with which behavioural and cognitive phenotypes can be predicted in the future.

Donders Talent Award (DTA) 2021

We are glad to announce the call for MLCN 2021’s ‘Donders Talent Award’ (DTA). The DTA is sponsored by the Donders Institute and aims to support talented students from developing countries with an interest and proven talent in the area of machine learning in clinical neuroimaging. The Donders Institute is a research centre devoted to understanding human cognition and behavior in health and disease. Hundreds of international researchers work to advance brain, cognitive, and behavioral science and to improve health, education, and technology.

The DTA covers the registration fee for the MLCN 2021 workshop, which is held in a virtual format this year. Eligible candidates are:

  • Bachelor, Master, or Ph.D. students studying in developing countries (see here for the list of eligible countries)
  • Students with a proven record of study and research in the domain of machine learning in clinical neuroimaging

The application form is available at this link. Applications submitted before 20 August 2021 will be considered and evaluated by the MLCN 2021 organizing committee.

For more information and questions, please contact Dr. Vinod Kumar or Dr. Thomas Wolfers.

Donders Institute will sponsor MLCN 2021

We are delighted to announce that the MLCN 2021 will be sponsored by Donders Institute. The Donders Institute is a research centre devoted to understanding human cognition and behavior in health and disease. Hundreds of international researchers aim at the advancement of brain, cognitive and behavioral science and improving health, education and technology.

According to our agreement, Donders Institute will generously cover i) the MLCN2021’s best paper award, Donders best paper award (see here for more details); ii) registration costs of our invited keynote speakers (see here for more details); and iii) registration fees for talented students from developing countries, Donders talent award (you can apply here).