Secure & Private Federated Neuroimaging
- URL: http://arxiv.org/abs/2205.05249v2
- Date: Mon, 28 Aug 2023 13:00:38 GMT
- Title: Secure & Private Federated Neuroimaging
- Authors: Dimitris Stripelis, Umang Gupta, Hamza Saleem, Nikhil Dhinagar, Tanmay
Ghai, Rafael Chrysovalantis Anastasiou, Armaghan Asghar, Greg Ver Steeg,
Srivatsan Ravi, Muhammad Naveed, Paul M. Thompson, Jose Luis Ambite
- Abstract summary: Federated Learning enables distributed training of neural network models over multiple data sources without sharing data.
Each site trains the neural network over its private data for some time, then shares the neural network parameters with a Federation Controller.
Our Federated Learning architecture, MetisFL, provides strong security and privacy.
- Score: 17.946206585229675
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The amount of biomedical data continues to grow rapidly. However, collecting
data from multiple sites for joint analysis remains challenging due to
security, privacy, and regulatory concerns. To overcome this challenge, we use
Federated Learning, which enables distributed training of neural network models
over multiple data sources without sharing data. Each site trains the neural
network over its private data for some time, then shares the neural network
parameters (i.e., weights, gradients) with a Federation Controller, which in
turn aggregates the local models, sends the resulting community model back to
each site, and the process repeats. Our Federated Learning architecture,
MetisFL, provides strong security and privacy. First, sample data never leaves
a site. Second, neural network parameters are encrypted before transmission and
the global neural model is computed under fully-homomorphic encryption.
Finally, we use information-theoretic methods to limit information leakage from
the neural model to prevent a curious site from performing model inversion or
membership inference attacks. We present a thorough evaluation of the performance of
secure, private federated learning in neuroimaging tasks, including for
predicting Alzheimer's disease and estimating BrainAGE from magnetic resonance
imaging (MRI) studies, in challenging, heterogeneous federated environments
where sites have different amounts of data and statistical distributions.
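
The abstract describes a standard federated training round: each site trains locally, encrypts its parameters, the Federation Controller aggregates the encrypted models, and the resulting community model is broadcast back for the next round. The sketch below illustrates that loop in plain NumPy; the encrypt/decrypt placeholders, the toy least-squares model, and all function names are illustrative assumptions, not the MetisFL API, which performs the aggregation under fully homomorphic encryption rather than on plaintext arrays.

```python
# Minimal sketch of one federated round as described in the abstract.
# encrypt/decrypt are identity placeholders standing in for fully homomorphic
# encryption, under which the controller would aggregate ciphertexts and never
# see plaintext weights. Function names are illustrative, not the MetisFL API.
import numpy as np

def encrypt(params):          # placeholder for FHE encryption at each site
    return params

def decrypt(params):          # placeholder for decryption back at the sites
    return params

def local_train(params, data, epochs=1, lr=0.01):
    """Stand-in for a site's local training over its private data."""
    x, y = data
    w = params.copy()
    for _ in range(epochs):
        grad = x.T @ (x @ w - y) / len(y)   # toy least-squares gradient
        w -= lr * grad
    return w

def federation_round(community_params, site_data, site_sizes):
    # 1. Each site trains locally and encrypts its parameters before transmission.
    encrypted_updates = [encrypt(local_train(community_params, d)) for d in site_data]
    # 2. The Federation Controller computes a weighted average of the local models.
    weights = np.asarray(site_sizes) / sum(site_sizes)
    aggregated = sum(w * u for w, u in zip(weights, encrypted_updates))
    # 3. The resulting community model is sent back to every site; the process repeats.
    return decrypt(aggregated)

# Toy usage: three sites holding different amounts of data.
rng = np.random.default_rng(0)
sites = [(rng.normal(size=(n, 5)), rng.normal(size=n)) for n in (50, 200, 30)]
model = np.zeros(5)
for _ in range(10):                                   # federation rounds
    model = federation_round(model, sites, [len(y) for _, y in sites])
```

Weighting each update by the site's sample count mirrors the usual federated-averaging choice for heterogeneous federations in which sites hold different amounts of data.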
Related papers
- Assessing Neural Network Representations During Training Using
Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
- Source-Free Collaborative Domain Adaptation via Multi-Perspective
Feature Enrichment for Functional MRI Analysis [55.03872260158717]
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains.
However, acquiring source data is challenging in multi-site studies due to privacy concerns and/or data storage burdens.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
arXiv Detail & Related papers (2023-08-24T01:30:18Z)
- Preserving Specificity in Federated Graph Learning for fMRI-based
Neurological Disorder Identification [31.668499876984487]
We propose a specificity-aware federated graph learning framework (SFGL) for rs-fMRI analysis and automated brain disorder identification.
At each client, our model consists of a shared and a personalized branch, where parameters of the shared branch are sent to the server while those of the personalized branch remain local.
Experimental results on two fMRI datasets with a total of 1,218 subjects suggest SFGL outperforms several state-of-the-art approaches.
arXiv Detail & Related papers (2023-08-20T15:55:45Z)
- Data-Driven Network Neuroscience: On Data Collection and Benchmark [6.796086914275059]
This paper presents a collection of functional human brain network data for potential research in the intersection of neuroscience, machine learning, and graph analytics.
The datasets originate from 6 different sources, cover 4 brain conditions, and consist of a total of 2,702 subjects.
arXiv Detail & Related papers (2022-11-11T02:14:28Z)
- Measuring Unintended Memorisation of Unique Private Features in Neural
Networks [15.174895411434026]
We show that neural networks unintentionally memorise unique features even when they occur only once in training data.
An example of a unique feature is a person's name that is accidentally present on a training image.
arXiv Detail & Related papers (2022-02-16T14:39:05Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Reducing Catastrophic Forgetting in Self Organizing Maps with
Internally-Induced Generative Replay [67.50637511633212]
A lifelong learning agent is able to continually learn from potentially infinite streams of pattern sensory data.
One major historic difficulty in building agents that adapt is that neural systems struggle to retain previously-acquired knowledge when learning from new samples.
This problem is known as catastrophic forgetting (interference) and remains an unsolved problem in the domain of machine learning to this day.
arXiv Detail & Related papers (2021-12-09T07:11:14Z)
- Complex-valued Federated Learning with Differential Privacy and MRI Applications [51.34714485616763]
We introduce the complex-valued Gaussian mechanism, whose behaviour we characterise in terms of $f$-DP, $(\varepsilon, \delta)$-DP, and Rényi-DP.
We present novel complex-valued neural network primitives compatible with DP.
Experimentally, we showcase a proof-of-concept by training federated complex-valued neural networks with DP on a real-world task.
arXiv Detail & Related papers (2021-10-07T14:03:00Z)
- NeuraCrypt: Hiding Private Health Data via Random Neural Networks for
Public Training [64.54200987493573]
We propose NeuraCrypt, a private encoding scheme based on random deep neural networks.
NeuraCrypt encodes raw patient data using a randomly constructed neural network known only to the data-owner.
We show that NeuraCrypt achieves accuracy competitive with non-private baselines on a variety of x-ray tasks; a minimal sketch of the random-network encoding idea appears after this list.
arXiv Detail & Related papers (2021-06-04T13:42:21Z)
- Scaling Neuroscience Research using Federated Learning [1.2234742322758416]
Machine learning approaches that require data to be copied to a single location are hampered by the challenges of data sharing.
Federated Learning is a promising approach to learn a joint model over data silos.
This architecture does not share any subject data across sites, only aggregated parameters, often in encrypted environments.
arXiv Detail & Related papers (2021-02-16T20:30:04Z)
- Multi-site fMRI Analysis Using Privacy-preserving Federated Learning and
Domain Adaptation: ABIDE Results [13.615292855384729]
To train a high-quality deep learning model, the aggregation of a significant amount of patient information is required.
Due to the need to protect the privacy of patient data, it is hard to assemble a central database from multiple institutions.
Federated learning allows for population-level models to be trained without centralizing entities' data.
arXiv Detail & Related papers (2020-01-16T04:49:33Z)
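
The NeuraCrypt entry above describes encoding raw patient data with a randomly constructed neural network known only to the data owner, so that only the encoded features are released for public training. The following is a minimal sketch of that idea; the RandomEncoder class, its dimensions, and the single projection-plus-ReLU layer are illustrative assumptions, whereas the actual NeuraCrypt encoder is a deeper random network operating on image patches.

```python
# Simplified, hypothetical illustration of NeuraCrypt-style encoding: the data
# owner fixes a random network (seed kept private), pushes raw records through
# it once, and releases only the encoded outputs for training elsewhere.
import numpy as np

class RandomEncoder:
    def __init__(self, in_dim, out_dim, seed=None):
        rng = np.random.default_rng(seed)     # seed stays private to the data owner
        self.W = rng.standard_normal((in_dim, out_dim)) / np.sqrt(in_dim)
        self.b = rng.standard_normal(out_dim)

    def encode(self, X):
        # Fixed, never-trained transform: random projection followed by ReLU.
        return np.maximum(X @ self.W + self.b, 0.0)

# Usage: encode private records locally, then share only the encoded features.
X_private = np.random.rand(100, 64)           # e.g., flattened image patches
encoder = RandomEncoder(in_dim=64, out_dim=128, seed=12345)
Z_public = encoder.encode(X_private)          # Z_public leaves the site; X_private does not
```

Because the encoder weights and seed never leave the data owner, a party training on the released encodings never sees the raw records, which is the property the NeuraCrypt paper evaluates on x-ray tasks.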
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.