Knowing when we do not know: Bayesian continual learning for
sensing-based analysis tasks
- URL: http://arxiv.org/abs/2106.05872v1
- Date: Sun, 6 Jun 2021 13:45:06 GMT
- Title: Knowing when we do not know: Bayesian continual learning for
sensing-based analysis tasks
- Authors: Sandra Servia-Rodriguez, Cecilia Mascolo and Young D. Kwon
- Abstract summary: We propose a Bayesian inference based framework to continually learn a set of real-world, sensing-based analysis tasks.
Our experiments demonstrate the robustness and reliability of the learned models in adapting to the changing sensing environment.
- Score: 8.201216572526302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite much research targeted at enabling conventional machine learning
models to continually learn tasks and data distributions sequentially without
forgetting previously acquired knowledge, little effort has been devoted to
accounting for more realistic situations where learning some tasks accurately
may be more critical than retaining previous ones. In this paper we propose a
Bayesian inference based framework to continually learn a set of real-world,
sensing-based analysis tasks that can be tuned to prioritize either remembering
previously learned tasks or learning new ones. Our experiments demonstrate the
robustness and reliability of the learned models in adapting to the changing
sensing environment, and show the suitability of using the uncertainty of the
predictions to assess their reliability.
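The paper's framework is not reproduced here, but its two core ideas can be illustrated with a toy Bayesian linear regression: the posterior learned on one task becomes the prior for the next, a tempering factor scales the prior to trade off remembering old tasks against learning new ones, and the predictive variance serves as an uncertainty signal. All names and values below (`lam`, the noise level, the toy tasks) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def bayes_linreg_update(mu0, P0, X, y, noise_var=0.1, lam=1.0):
    """One continual-learning step for Bayesian linear regression.

    The previous posterior (mean mu0, precision P0) acts as the prior for
    the new task. lam scales the prior precision: lam > 1 prioritizes
    remembering old tasks, lam < 1 prioritizes fitting the new one.
    """
    P = lam * P0 + X.T @ X / noise_var                      # posterior precision
    mu = np.linalg.solve(P, lam * (P0 @ mu0) + X.T @ y / noise_var)
    return mu, P

def predictive_uncertainty(mu, P, x, noise_var=0.1):
    """Predictive variance at input x; large values flag unreliable predictions."""
    cov = np.linalg.inv(P)
    return noise_var + x @ cov @ x

# Toy sequence: task A has true weight 2, task B has true weight -1.
rng = np.random.default_rng(0)
XA = rng.normal(size=(50, 1)); yA = 2.0 * XA[:, 0] + 0.1 * rng.normal(size=50)
XB = rng.normal(size=(50, 1)); yB = -1.0 * XB[:, 0] + 0.1 * rng.normal(size=50)

mu, P = bayes_linreg_update(np.zeros(1), np.eye(1), XA, yA)       # learn task A
mu_keep, P_keep = bayes_linreg_update(mu, P, XB, yB, lam=50.0)    # remember A
mu_new, P_new = bayes_linreg_update(mu, P, XB, yB, lam=0.01)      # learn B

# Predictive variance at x = 1.0 indicates how much to trust the prediction.
unc = predictive_uncertainty(mu_keep, P_keep, np.array([1.0]))
```

With a large `lam` the model after task B stays close to task A's weight, while a small `lam` moves it toward task B's weight, mirroring the tunable remember/learn tradeoff described in the abstract.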
Related papers
- When Is Prior Knowledge Helpful? Exploring the Evaluation and Selection of Unsupervised Pretext Tasks from a Neuro-Symbolic Perspective [45.419765404078724]
We extend the NeSy theory based on reliable knowledge to the scenario of unreliable knowledge.
We propose schemes to operationalize these theoretical metrics, and thereby develop a method that can predict the effectiveness of pretext tasks in advance.
arXiv Detail & Related papers (2025-08-10T11:23:36Z)
- Spurious Forgetting in Continual Learning of Language Models [20.0936011355535]
Recent advancements in large language models (LLMs) reveal a perplexing phenomenon in continual learning.
Despite extensive training, models experience significant performance declines.
This study proposes that such performance drops often reflect a decline in task alignment rather than true knowledge loss.
arXiv Detail & Related papers (2025-01-23T08:09:54Z)
- Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
arXiv Detail & Related papers (2024-10-10T10:58:41Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
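BAdam's exact update rule is not described here, but the general shape of a prior-based continual-learning objective can be sketched: a quadratic penalty (in the spirit of EWC-style methods, not BAdam itself) anchors parameters to the values learned on earlier tasks, weighted by a per-parameter importance. The names, losses, and values below are illustrative assumptions:

```python
import numpy as np

def prior_penalized_loss(theta, task_loss, theta_prev, importance, lam=1.0):
    """Generic prior-based continual-learning objective (EWC-style sketch):
    the new-task loss plus a penalty on moving parameters away from the
    values learned on previous tasks, weighted by per-parameter importance."""
    penalty = lam * np.sum(importance * (theta - theta_prev) ** 2)
    return task_loss(theta) + penalty

# Toy example: the new task pulls theta toward 3, the prior anchors it at 1.
theta_prev = np.array([1.0])
importance = np.array([10.0])       # parameter deemed important for old tasks
task_loss = lambda th: float(np.sum((th - 3.0) ** 2))

# Grid search over theta makes the stability-plasticity tradeoff visible:
# the minimizer sits between the old value (1) and the new-task optimum (3).
grid = np.linspace(0.0, 4.0, 401)
best = min(grid, key=lambda t: prior_penalized_loss(np.array([t]), task_loss,
                                                    theta_prev, importance))
```

Raising `importance` (or `lam`) pulls the minimizer toward the old parameter value, which is how prior-based methods constrain parameter growth and reduce forgetting.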
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Overcoming Generic Knowledge Loss with Selective Parameter Update [48.240683797965005]
We propose a novel approach to continuously update foundation models.
Instead of updating all parameters equally, we localize the updates to a sparse set of parameters relevant to the task being learned.
Our method achieves improvements on the accuracy of the newly learned tasks up to 7% while preserving the pretraining knowledge with a negligible decrease of 0.9% on a representative control set accuracy.
arXiv Detail & Related papers (2023-08-23T22:55:45Z)
- Trust, but Verify: Using Self-Supervised Probing to Improve Trustworthiness [29.320691367586004]
We introduce a new approach, self-supervised probing, which enables us to check and mitigate the overconfidence issue for a trained model.
We provide a simple yet effective framework, which can be flexibly applied to existing trustworthiness-related methods in a plug-and-play manner.
arXiv Detail & Related papers (2023-02-06T08:57:20Z)
- Learning and Retrieval from Prior Data for Skill-based Imitation Learning [47.59794569496233]
We develop a skill-based imitation learning framework that extracts temporally extended sensorimotor skills from prior data.
We identify several key design choices that significantly improve performance on novel tasks.
arXiv Detail & Related papers (2022-10-20T17:34:59Z)
- Relational Experience Replay: Continual Learning by Adaptively Tuning Task-wise Relationship [54.73817402934303]
We propose Relational Experience Replay (RER), a bi-level learning framework that adaptively tunes task-wise relationships to achieve a better stability-plasticity tradeoff.
RER can consistently improve the performance of all baselines and surpass current state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T12:05:22Z)
- Continual Learning with Neuron Activation Importance [1.7513645771137178]
Continual learning is a setting of online learning over multiple sequential tasks.
One of the critical barriers of continual learning is that a network must learn a new task while retaining the knowledge of old tasks, without access to any data from those tasks.
We propose a neuron activation importance-based regularization method for stable continual learning regardless of the order of tasks.
arXiv Detail & Related papers (2021-07-27T08:09:32Z)
- Knowledge-driven Data Construction for Zero-shot Evaluation in Commonsense Question Answering [80.60605604261416]
We propose a novel neuro-symbolic framework for zero-shot question answering across commonsense tasks.
We vary the set of language models, training regimes, knowledge sources, and data generation strategies, and measure their impact across tasks.
We show that, while an individual knowledge graph is better suited for specific tasks, a global knowledge graph brings consistent gains across different tasks.
arXiv Detail & Related papers (2020-11-07T22:52:21Z)
- Importance Weighted Policy Learning and Adaptation [89.46467771037054]
We study a complementary approach which is conceptually simple, general, modular and built on top of recent improvements in off-policy learning.
The framework is inspired by ideas from the probabilistic inference literature and combines robust off-policy learning with a behavior prior.
Our approach achieves competitive adaptation performance on hold-out tasks compared to meta reinforcement learning baselines and can scale to complex sparse-reward scenarios.
arXiv Detail & Related papers (2020-09-10T14:16:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.