Incremental Real-Time Personalization in Human Activity Recognition
Using Domain Adaptive Batch Normalization
- URL: http://arxiv.org/abs/2005.12178v2
- Date: Mon, 21 Dec 2020 14:13:48 GMT
- Title: Incremental Real-Time Personalization in Human Activity Recognition
Using Domain Adaptive Batch Normalization
- Authors: Alan Mazankiewicz, Klemens Böhm, Mario Bergés
- Abstract summary: Human Activity Recognition (HAR) from devices like smartphone accelerometers is a fundamental problem in ubiquitous computing.
Previous work has addressed this challenge by personalizing general recognition models to the unique motion pattern of a new user in a static batch setting.
Our work instead tackles the harder online setting, where unlabeled target-user data arrives sequentially, by proposing an unsupervised online domain adaptation algorithm.
- Score: 1.160208922584163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human Activity Recognition (HAR) from devices like smartphone accelerometers
is a fundamental problem in ubiquitous computing. Machine learning based
recognition models often perform poorly when applied to new users that were not
part of the training data. Previous work has addressed this challenge by
personalizing general recognition models to the unique motion pattern of a new
user in a static batch setting. They require target user data to be available
upfront. The more challenging online setting has received less attention. No
samples from the target user are available in advance, but they arrive
sequentially. Additionally, the motion pattern of users may change over time.
Thus, adapting to new and forgetting old information must be traded off.
Finally, the target user should not have to do any work to use the recognition
system by, say, labeling any activities. Our work addresses all of these
challenges by proposing an unsupervised online domain adaptation algorithm.
Both classification and personalization happen continuously and incrementally
in real time. Our solution works by aligning the feature distributions of all
subjects, be they sources or the target, in hidden neural network layers. To
this end, we normalize the input of a layer with user-specific mean and
variance statistics. During training, these statistics are computed over
user-specific batches. In the online phase, they are estimated incrementally
for any new target user.
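The per-user normalization described above can be sketched in a few lines: during training, a layer's input is normalized with mean and variance computed over that user's batch; in the online phase, the same statistics are estimated incrementally as each target-user sample arrives. The following is a minimal NumPy sketch, not the paper's implementation; the class name, the Welford-style running update, and the simulated data are illustrative assumptions.

```python
import numpy as np

class UserAdaptiveBatchNorm:
    """Normalizes layer inputs with user-specific mean/variance statistics.

    Training phase: statistics come from user-specific batches.
    Online phase: statistics for a new target user are folded in
    incrementally, one sample at a time (Welford-style update).
    """

    def __init__(self, num_features, eps=1e-5):
        self.eps = eps
        self.gamma = np.ones(num_features)   # learned scale, shared across users
        self.beta = np.zeros(num_features)   # learned shift, shared across users
        self.count = 0                       # samples seen for this user
        self.mean = np.zeros(num_features)   # running per-user mean
        self.m2 = np.zeros(num_features)     # running sum of squared deviations

    def update(self, x):
        """Incrementally fold one sample (or small batch) into the
        running per-user statistics."""
        for row in np.atleast_2d(x):
            self.count += 1
            delta = row - self.mean
            self.mean += delta / self.count
            self.m2 += delta * (row - self.mean)

    def normalize(self, x):
        """Normalize with the current per-user estimates, then apply
        the shared learned affine transform."""
        var = self.m2 / max(self.count - 1, 1)
        return self.gamma * (x - self.mean) / np.sqrt(var + self.eps) + self.beta

# Online phase for a hypothetical new target user: each incoming feature
# vector first updates the user's statistics, then is normalized before
# continuing through the rest of the network.
rng = np.random.default_rng(0)
bn = UserAdaptiveBatchNorm(num_features=3)
for t in range(100):
    sample = rng.normal(loc=2.0, scale=0.5, size=3)  # simulated user features
    bn.update(sample)
    normalized = bn.normalize(sample)
```

Because the running estimates weight all samples seen so far, a practical variant might use an exponential moving average instead, so that the statistics track a user whose motion pattern drifts over time.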
Related papers
- ELF-UA: Efficient Label-Free User Adaptation in Gaze Estimation [14.265464822002924]
Our goal is to provide a personalized gaze estimation model specifically adapted to a target user.
Previous work requires some labeled images of the target person data to fine-tune the model at test time.
Our proposed method uses a meta-learning approach to learn how to adapt to a new user with only a few unlabeled images.
arXiv Detail & Related papers (2024-06-13T13:00:33Z)
- Using Motion Forecasting for Behavior-Based Virtual Reality (VR) Authentication [8.552737863305213]
We present the first approach that predicts future user behavior using Transformer-based forecasting and using the forecasted trajectory to perform user authentication.
Our approach reduces the authentication equal error rate (EER) by an average of 23.85% and a maximum reduction of 36.14%.
arXiv Detail & Related papers (2024-01-30T00:43:41Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries [53.222218035435006]
We use adversarial tools to optimize for queries that are discriminative and diverse.
Our improvements achieve significantly more accurate membership inference than existing methods.
arXiv Detail & Related papers (2022-10-19T17:46:50Z)
- Reducing Impacts of System Heterogeneity in Federated Learning using Weight Update Magnitudes [0.0]
Federated learning enables machine learning models to train locally on each handheld device while only synchronizing their neuron updates with a server.
This results in the training time of federated learning tasks being dictated by a few low-performance straggler devices.
In this work, we aim to mitigate the performance bottleneck of federated learning by dynamically forming sub-models for stragglers based on their performance and accuracy feedback.
arXiv Detail & Related papers (2022-08-30T00:39:06Z)
- Continual Test-Time Domain Adaptation [94.51284735268597]
Test-time domain adaptation aims to adapt a source pre-trained model to a target domain without using any source data.
CoTTA is easy to implement and can be readily incorporated in off-the-shelf pre-trained models.
arXiv Detail & Related papers (2022-03-25T11:42:02Z)
- Pedestrian Detection: Domain Generalization, CNNs, Transformers and Beyond [82.37430109152383]
We show that current pedestrian detectors poorly handle even small domain shifts in cross-dataset evaluation.
We attribute the limited generalization to two main factors, the method and the current sources of data.
We propose a progressive fine-tuning strategy which improves generalization.
arXiv Detail & Related papers (2022-01-10T06:00:26Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- Transfer Learning for Human Activity Recognition using Representational Analysis of Neural Networks [0.5898893619901381]
We propose a transfer learning framework for human activity recognition.
We show up to 43% accuracy improvement and 66% training time reduction when compared to the baseline without using transfer learning.
arXiv Detail & Related papers (2020-12-05T01:35:11Z)
- Into the Unknown: Active Monitoring of Neural Networks [9.591060426695748]
We introduce an algorithmic framework for active monitoring of a neural network.
A monitor wrapped in our framework operates in parallel with the neural network and interacts with a human user.
An experimental evaluation on a diverse set of benchmarks confirms the benefits of our active monitoring framework in dynamic scenarios.
arXiv Detail & Related papers (2020-09-14T13:29:47Z)
- Understanding Self-Training for Gradual Domain Adaptation [107.37869221297687]
We consider gradual domain adaptation, where the goal is to adapt an initial classifier trained on a source domain given only unlabeled data that shifts gradually in distribution towards a target domain.
We prove the first non-vacuous upper bound on the error of self-training with gradual shifts, under settings where directly adapting to the target domain can result in unbounded error.
The theoretical analysis leads to algorithmic insights, highlighting that regularization and label sharpening are essential even when we have infinite data, and suggesting that self-training works particularly well for shifts with small Wasserstein-infinity distance.
arXiv Detail & Related papers (2020-02-26T08:59:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.