Privacy-Preserving Object Detection & Localization Using Distributed
Machine Learning: A Case Study of Infant Eyeblink Conditioning
- URL: http://arxiv.org/abs/2010.07259v1
- Date: Wed, 14 Oct 2020 17:33:28 GMT
- Title: Privacy-Preserving Object Detection & Localization Using Distributed
Machine Learning: A Case Study of Infant Eyeblink Conditioning
- Authors: Stefan Zwaard, Henk-Jan Boele, Hani Alers, Christos Strydis, Casey
Lew-Williams, and Zaid Al-Ars
- Abstract summary: We explore scalable distributed-training versions of two algorithms commonly used in object detection.
The application of both algorithms in the medical field is examined using a paradigm from the fields of psychology and neuroscience.
- Score: 1.3022864665437273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Distributed machine learning is becoming a popular model-training method due
to privacy, computational scalability, and bandwidth capacities. In this work,
we explore scalable distributed-training versions of two algorithms commonly
used in object detection. A novel distributed training algorithm using Mean
Weight Matrix Aggregation (MWMA) is proposed for Linear Support Vector Machine
(L-SVM) object detection based on Histogram of Oriented Gradients (HOG). In
addition, a novel Weighted Bin Aggregation (WBA) algorithm is proposed for
distributed training of Ensemble of Regression Trees (ERT) landmark
localization. Neither algorithm restricts the location of model aggregation,
and both allow custom architectures for model distribution. For this work, a
Pool-Based Local Training and Aggregation (PBLTA) architecture for both
algorithms is explored. The application of both algorithms in the medical field
is examined using a paradigm from the fields of psychology and neuroscience -
eyeblink conditioning with infants - where models need to be trained on facial
images while protecting participant privacy. Using distributed learning, models
can be trained without sending image data to other nodes. The custom software
has been made available for public use on GitHub:
https://github.com/SLWZwaard/DMT. Results show that the aggregation of models
for the HOG algorithm using MWMA not only preserves the accuracy of the model
but also allows for distributed learning with an accuracy increase of 0.9%
compared with traditional learning. Furthermore, WBA allows for ERT model
aggregation with an accuracy increase of 8% when compared to single-node
models.
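The abstract does not spell out the aggregation rule, but the name Mean Weight Matrix Aggregation suggests an element-wise average of the locally trained L-SVM weights. The sketch below illustrates that reading, assuming each node extracts HOG features and trains a linear SVM on its own images, so that only weight matrices and biases (never image data) leave the node. All function names and parameters here are illustrative and are not taken from the DMT repository.

    # Minimal MWMA sketch: nodes train linear SVMs on local HOG features and
    # share only the learned weights; the pool averages them element-wise.
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def train_local_detector(images, labels):
        """Train an L-SVM detector on one node's private images (data stays local)."""
        feats = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                              cells_per_block=(2, 2)) for img in images])  # img: 2D grayscale array
        clf = LinearSVC(C=0.01).fit(feats, labels)
        return clf.coef_, clf.intercept_   # only these leave the node

    def mwma_aggregate(local_models):
        """Element-wise mean of the weight matrices and biases from all nodes."""
        weights = np.mean([w for w, _ in local_models], axis=0)
        biases = np.mean([b for _, b in local_models], axis=0)
        return weights, biases

    def score_window(weights, biases, window):
        """Score a candidate detection window with the aggregated detector."""
        feat = hog(window, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))
        return float(feat @ weights.T + biases)   # positive score => detection

A corresponding sketch of Weighted Bin Aggregation for the ERT landmark model would combine per-node regression outputs using per-bin weights, but the abstract does not specify how those weights are computed, so it is not shown here.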
Related papers
- Diffusion-Model-Assisted Supervised Learning of Generative Models for
Density Estimation [10.793646707711442]
We present a framework for training generative models for density estimation.
We use the score-based diffusion model to generate labeled data.
Once the labeled data are generated, we can train a simple fully connected neural network to learn the generative model in a supervised manner.
arXiv Detail & Related papers (2023-10-22T23:56:19Z) - The Languini Kitchen: Enabling Language Modelling Research at Different
Scales of Compute [66.84421705029624]
We introduce an experimental protocol that enables model comparisons based on equivalent compute, measured in accelerator hours.
We pre-process an existing large, diverse, and high-quality dataset of books that surpasses existing academic benchmarks in quality, diversity, and document length.
This work also provides two baseline models: a feed-forward model derived from the GPT-2 architecture and a recurrent model in the form of a novel LSTM with ten-fold throughput.
arXiv Detail & Related papers (2023-09-20T10:31:17Z) - Cramer Type Distances for Learning Gaussian Mixture Models by Gradient
Descent [0.0]
As of today, few known algorithms can fit or learn Gaussian mixture models.
We propose a distance function called the Sliced Cramér 2-distance for learning general multivariate GMMs.
These features are especially useful for distributional reinforcement learning and Deep Q Networks.
arXiv Detail & Related papers (2023-07-13T13:43:02Z) - Just One Byte (per gradient): A Note on Low-Bandwidth Decentralized
Language Model Finetuning Using Shared Randomness [86.61582747039053]
Language model training in distributed settings is limited by the communication cost of exchanges.
We extend recent work using shared randomness to perform distributed fine-tuning with low bandwidth.
arXiv Detail & Related papers (2023-06-16T17:59:51Z) - Structured Cooperative Learning with Graphical Model Priors [98.53322192624594]
We study how to train personalized models for different tasks on decentralized devices with limited local data.
We propose "Structured Cooperative Learning (SCooL)", in which a cooperation graph across devices is generated by a graphical model.
We evaluate SCooL and compare it with existing decentralized learning methods on an extensive set of benchmarks.
arXiv Detail & Related papers (2023-06-16T02:41:31Z) - VertiBayes: Learning Bayesian network parameters from vertically partitioned data with missing values [2.9707233220536313]
Federated learning makes it possible to train a machine learning model on decentralized data.
We propose a novel method called VertiBayes to train Bayesian networks on vertically partitioned data.
We experimentally show our approach produces models comparable to those learnt using traditional algorithms.
arXiv Detail & Related papers (2022-10-31T11:13:35Z) - Adapting the Mean Teacher for keypoint-based lung registration under
geometric domain shifts [75.51482952586773]
Deep neural networks generally require large amounts of labeled training data and are vulnerable to domain shifts between training and test data.
We present a novel approach to geometric domain adaptation for image registration, adapting a model from a labeled source to an unlabeled target domain.
Our method consistently improves on the baseline model by 50%/47% while even matching the accuracy of models trained on target data.
arXiv Detail & Related papers (2022-07-01T12:16:42Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity
to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model (a minimal sketch of this pattern appears after this list).
arXiv Detail & Related papers (2020-05-22T23:07:42Z) - Customized Video QoE Estimation with Algorithm-Agnostic Transfer
Learning [1.452875650827562]
Small datasets, limited diversity of user profiles in the source domain, and high diversity in target domains are key challenges for QoE models.
We present a transfer learning-based ML model training approach, which allows decentralized local models to share generic indicators on Mean Opinion Scores (MOS).
We show that the proposed approach is agnostic to the specific ML algorithms stacked upon each other, as it does not require the collaborating local nodes to run the same ML algorithm.
arXiv Detail & Related papers (2020-03-12T15:28:10Z)
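Sketch referenced from the FedPD entry above: the "computation then aggregation" (CTA) pattern behind FedAvg is closely related to the local-training-then-aggregation flow of the PBLTA architecture in the main paper. The following is a minimal, illustrative CTA round using a least-squares objective; the sample-count weighting follows the standard FedAvg formulation, and all names are hypothetical.

    # Minimal sketch of one "computation then aggregation" (CTA) round in the
    # style of FedAvg: each client updates the global model on its own data,
    # then the server averages the results weighted by local sample counts.
    import numpy as np

    def local_update(global_w, X, y, lr=0.1, epochs=5):
        """Client-side computation: a few epochs of gradient descent on 0.5*||Xw - y||^2 / n."""
        w = global_w.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)
            w -= lr * grad
        return w, len(y)                      # local model and sample count

    def fedavg_aggregate(client_results):
        """Server-side aggregation: sample-count-weighted average of client models."""
        total = sum(n for _, n in client_results)
        return sum(n * w for w, n in client_results) / total

    def cta_round(global_w, client_data):
        """One communication round over a list of (X, y) datasets, one per client."""
        return fedavg_aggregate([local_update(global_w, X, y) for X, y in client_data])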