Lifelong Learning from Event-based Data
- URL: http://arxiv.org/abs/2111.08458v1
- Date: Thu, 11 Nov 2021 17:59:41 GMT
- Title: Lifelong Learning from Event-based Data
- Authors: Vadym Gryshchuk, Cornelius Weber, Chu Kiong Loo, Stefan Wermter
- Abstract summary: We investigate methods for learning from data produced by event cameras.
We propose a model composed of both a feature extraction module and a continuous learning module.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lifelong learning is a long-standing aim for artificial agents that act in
dynamic environments, in which an agent needs to accumulate knowledge
incrementally without forgetting previously learned representations. We
investigate methods for learning from data produced by event cameras and
compare techniques to mitigate forgetting while learning incrementally. We
propose a model composed of both a feature extraction module and a continuous
learning module. Furthermore, we introduce a habituation-based method to mitigate
forgetting. Our experimental results show that the combination of different
techniques can help to avoid catastrophic forgetting while learning
incrementally from the features provided by the extraction module.
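Habituation, as used above, roughly means reducing the plasticity of frequently activated units so that consolidated representations are overwritten less by new data. A minimal toy sketch of this idea, assuming a simple linear layer with an exponential habituation trace (the class name, update rule, and parameters are illustrative assumptions, not the paper's actual formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

class HabituatedLayer:
    """Toy linear layer whose per-unit plasticity decreases
    (habituates) as units are repeatedly activated, so weights
    serving previously learned inputs are updated less."""

    def __init__(self, n_in, n_out, tau=0.9):
        self.W = rng.normal(scale=0.1, size=(n_out, n_in))
        self.habituation = np.zeros(n_out)  # 0 = fully plastic, 1 = frozen
        self.tau = tau                      # decay rate of the habituation trace

    def forward(self, x):
        return self.W @ x

    def update(self, x, grad_out, lr=0.1):
        # Units that fire repeatedly habituate: their effective
        # learning rate shrinks, mitigating forgetting.
        active = (self.forward(x) > 0).astype(float)
        self.habituation = self.tau * self.habituation + (1 - self.tau) * active
        eff_lr = lr * (1.0 - self.habituation)        # per-unit learning rate
        self.W -= eff_lr[:, None] * np.outer(grad_out, x)

layer = HabituatedLayer(4, 3)
x = rng.normal(size=4)
for _ in range(50):
    layer.update(x, grad_out=layer.forward(x))  # repeated stimulus
# active units habituate toward 1, so their plasticity approaches zero
```

The design choice here is that habituation acts as a per-unit learning-rate gate rather than an explicit regularization loss, which keeps the mechanism local and cheap.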
Related papers
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar to, but potentially even more practical than, those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal and enables the algorithm to learn behaviors that improve over the potentially suboptimal human expert.
arXiv Detail & Related papers (2023-11-21T21:05:21Z) - VERSE: Virtual-Gradient Aware Streaming Lifelong Learning with Anytime
Inference [36.61783715563126]
Streaming lifelong learning is a challenging setting whose goal is continual learning without forgetting.
We introduce a novel streaming approach to lifelong learning, in which each training example is observed only once.
We propose a novel virtual-gradient-based approach for continual representation learning, which adapts to each new example while also generalizing well on past data to prevent catastrophic forgetting.
arXiv Detail & Related papers (2023-09-15T07:54:49Z) - Reinforcement Learning Based Multi-modal Feature Fusion Network for
Novel Class Discovery [47.28191501836041]
In this paper, we employ a Reinforcement Learning framework to simulate the cognitive processes of humans.
We also deploy a Member-to-Leader Multi-Agent framework to extract and fuse features from multi-modal information.
We demonstrate the performance of our approach in both the 3D and 2D domains by employing the OS-MN40, OS-MN40-Miss, and CIFAR-10 datasets.
arXiv Detail & Related papers (2023-08-26T07:55:32Z) - Advancing continual lifelong learning in neural information retrieval: definition, dataset, framework, and empirical evaluation [3.2340528215722553]
A systematic task formulation of continual neural information retrieval is presented.
A comprehensive continual neural information retrieval framework is proposed.
Empirical evaluations illustrate that the proposed framework can successfully prevent catastrophic forgetting in neural information retrieval.
arXiv Detail & Related papers (2023-08-16T14:01:25Z) - SeMAIL: Eliminating Distractors in Visual Imitation via Separated Models [22.472167814814448]
We propose a new model-based imitation learning algorithm named Separated Model-based Adversarial Imitation Learning (SeMAIL).
Our method achieves near-expert performance on various visual control tasks with complex observations and the more challenging tasks with different backgrounds from expert observations.
arXiv Detail & Related papers (2023-06-19T04:33:44Z) - Accelerating exploration and representation learning with offline
pre-training [52.6912479800592]
We show that exploration and representation learning can be improved by separately learning two different models from a single offline dataset.
We show that learning a state representation using noise-contrastive estimation and a model of auxiliary reward can significantly improve the sample efficiency on the challenging NetHack benchmark.
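Noise-contrastive estimation, mentioned in the entry above for state-representation learning, scores a matching (positive) pair against noise (negative) pairs. A toy InfoNCE-style sketch of this idea (the function name, dot-product scorer, and temperature are illustrative assumptions, not the paper's exact objective):

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Toy InfoNCE-style contrastive loss: the positive pair
    should score higher than the noise (negative) pairs."""
    def score(a, b):
        return a @ b / temperature
    pos = score(anchor, positive)
    negs = np.array([score(anchor, n) for n in negatives])
    logits = np.concatenate([[pos], negs])
    # cross-entropy with the positive pair as the target class,
    # computed with the log-sum-exp trick for numerical stability
    m = np.max(logits)
    return -pos + m + np.log(np.sum(np.exp(logits - m)))

rng = np.random.default_rng(0)
a = rng.normal(size=8)
p = a + 0.01 * rng.normal(size=8)   # positive: slight perturbation of anchor
negs = [rng.normal(size=8) for _ in range(5)]
loss = info_nce_loss(a, p, negs)     # small when the positive dominates
```

Minimizing such a loss pushes representations of related observations together and unrelated ones apart, which is one common way contrastive state representations are learned.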
arXiv Detail & Related papers (2023-03-31T18:03:30Z) - Continual Learning with Bayesian Model based on a Fixed Pre-trained
Feature Extractor [55.9023096444383]
Current deep learning models are characterised by catastrophic forgetting of old knowledge when learning new classes.
Inspired by the process of learning new knowledge in human brains, we propose a Bayesian generative model for continual learning.
arXiv Detail & Related papers (2022-04-28T08:41:51Z) - Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment for learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z) - DEALIO: Data-Efficient Adversarial Learning for Imitation from
Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm
arXiv Detail & Related papers (2021-03-31T23:46:32Z) - Meta-Learning with Sparse Experience Replay for Lifelong Language
Learning [26.296412053816233]
We propose a novel approach to lifelong learning of language tasks based on meta-learning with sparse experience replay.
We show that under the realistic setting of performing a single pass on a stream of tasks, our method obtains state-of-the-art results on lifelong text classification and relation extraction.
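Sparse experience replay, as in the entry above, rehearses a small stored sample of past examples only occasionally while streaming through tasks in a single pass. A toy sketch using reservoir sampling (the class name, capacity, and replay schedule are illustrative assumptions, not the paper's method):

```python
import random

class SparseReplayBuffer:
    """Toy buffer for sparse experience replay: keep a uniform
    reservoir sample of the stream and return a small batch to
    rehearse only every `replay_every` steps."""

    def __init__(self, capacity=100, replay_every=10):
        self.capacity = capacity
        self.replay_every = replay_every
        self.buffer = []
        self.seen = 0

    def observe(self, example):
        # Reservoir sampling: each example seen so far is kept
        # in the buffer with equal probability.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example
        # Sparse replay: rehearse only occasionally, not every step.
        if self.seen % self.replay_every == 0 and self.buffer:
            return random.sample(self.buffer, min(4, len(self.buffer)))
        return []

buf = SparseReplayBuffer(capacity=5, replay_every=3)
replayed = [buf.observe(i) for i in range(20)]  # non-empty every 3rd step
```

Reservoir sampling is a natural fit for the single-pass setting, since it maintains a uniform sample without knowing the stream length in advance.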
arXiv Detail & Related papers (2020-09-10T14:36:38Z) - Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new-classes and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.