How to Train Your Differentiable Filter
- URL: http://arxiv.org/abs/2012.14313v1
- Date: Mon, 28 Dec 2020 15:51:07 GMT
- Title: How to Train Your Differentiable Filter
- Authors: Alina Kloss, Georg Martius and Jeannette Bohg
- Abstract summary: We investigate the advantages of differentiable filters over both unstructured learning approaches and manually-tuned filtering algorithms.
Specifically, we evaluate how well complex models of uncertainty can be learned in DFs.
- Score: 23.108005930763586
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In many robotic applications, it is crucial to maintain a belief about the
state of a system, which serves as input for planning and decision making and
provides feedback during task execution. Bayesian Filtering algorithms address
this state estimation problem, but they require models of process dynamics and
sensory observations and the respective noise characteristics of these models.
Recently, multiple works have demonstrated that these models can be learned by
end-to-end training through differentiable versions of recursive filtering
algorithms. In this work, we investigate the advantages of differentiable
filters (DFs) over both unstructured learning approaches and manually-tuned
filtering algorithms, and provide practical guidance to researchers interested
in applying such differentiable filters. For this, we implement DFs with four
different underlying filtering algorithms and compare them in extensive
experiments. Specifically, we (i) evaluate different implementation choices and
training approaches, (ii) investigate how well complex models of uncertainty
can be learned in DFs, (iii) evaluate the effect of end-to-end training through
DFs and (iv) compare the DFs among each other and to unstructured LSTM models.
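The recursive predict/update loop that differentiable filters backpropagate through can be sketched with a minimal linear Kalman filter. This is an illustrative toy (a 1D constant-velocity system with hypothetical parameters), not the paper's implementation; in a DF, the matrices `F`, `H` and the noise covariances `Q`, `R` (or networks producing them) would be the quantities learned end-to-end.

```python
import numpy as np

# Minimal linear Kalman filter for a 1D constant-velocity system.
# In a differentiable filter, F, H, Q, R (or neural networks that
# output them) would be learned by backpropagating through these steps.

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # process model: state = [position, velocity]
H = np.array([[1.0, 0.0]])              # observation model: we measure position only
Q = 0.01 * np.eye(2)                    # process noise covariance (learnable in a DF)
R = np.array([[0.25]])                  # observation noise covariance (learnable in a DF)

def kf_step(x, P, z):
    # Predict: propagate the belief mean and covariance through the process model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: fuse the prediction with the noisy measurement z.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Track an object moving at unit velocity from noisy position measurements.
rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
for t in range(1, 21):
    z = np.array([t * 1.0 + rng.normal(scale=0.5)])
    x, P = kf_step(x, P, z)
print(x)  # estimated [position, velocity], should be close to [20, 1]
```

Because every step above is composed of differentiable matrix operations, a loss on the final state estimate can be backpropagated through the whole filtering loop, which is the core idea the paper's DF implementations build on.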
Related papers
- Inference-Time Alignment in Diffusion Models with Reward-Guided Generation: Tutorial and Review [59.856222854472605]
This tutorial provides an in-depth guide on inference-time guidance and alignment methods for optimizing downstream reward functions in diffusion models.
Practical applications in fields such as biology often require sample generation that maximizes specific metrics.

We discuss (1) fine-tuning methods combined with inference-time techniques, (2) inference-time algorithms based on search algorithms such as Monte Carlo tree search, and (3) connections between inference-time algorithms in language models and diffusion models.
arXiv Detail & Related papers (2025-01-16T17:37:35Z)
- AI-Aided Kalman Filters [65.35350122917914]
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing.
Recent developments illustrate the possibility of fusing deep neural networks (DNNs) with classic Kalman-type filtering.
This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms.
arXiv Detail & Related papers (2024-10-16T06:47:53Z)
- MUSO: Achieving Exact Machine Unlearning in Over-Parameterized Regimes [19.664090734076712]
Machine unlearning (MU) makes a well-trained model behave as if it had never been trained on specific data.
We propose an alternating optimization algorithm that unifies the tasks of unlearning and relabeling.
The algorithm's effectiveness, confirmed through numerical experiments, highlights its superior performance in unlearning across various scenarios.
arXiv Detail & Related papers (2024-10-11T06:17:17Z)
- Learning Optimal Filters Using Variational Inference [0.3749861135832072]
We present a framework for learning a parameterized analysis map - the map that takes a forecast distribution and observations to the filtering distribution.
We show that this methodology can be used to learn gain matrices for filtering linear and nonlinear dynamical systems.
Future work will apply this framework to learn new filtering algorithms.
arXiv Detail & Related papers (2024-06-26T04:51:14Z)
- Regime Learning for Differentiable Particle Filters [19.35021771863565]
Differentiable particle filters are an emerging class of models that combine sequential Monte Carlo techniques with the flexibility of neural networks to perform state space inference.
No prior approaches effectively learn both the individual regimes and the switching process simultaneously.
We propose the neural network based regime learning differentiable particle filter (RLPF) to address this problem.
arXiv Detail & Related papers (2024-05-08T07:43:43Z)
- Learning Differentiable Particle Filter on the Fly [18.466658684464598]
Differentiable particle filters are an emerging class of sequential Bayesian inference techniques.
We propose an online learning framework for differentiable particle filters so that model parameters can be updated as data arrive.
arXiv Detail & Related papers (2023-12-10T17:54:40Z)
- ModelDiff: A Framework for Comparing Learning Algorithms [86.19580801269036]
We study the problem of (learning) algorithm comparison, where the goal is to find differences between models trained with two different learning algorithms.
We present ModelDiff, a method that leverages the datamodels framework to compare learning algorithms based on how they use their training data.
arXiv Detail & Related papers (2022-11-22T18:56:52Z)
- Deep Variational Models for Collaborative Filtering-based Recommender Systems [63.995130144110156]
Deep learning provides accurate collaborative filtering models to improve recommender system results.
Our proposed models apply the variational concept to inject randomness into the latent space of the deep architecture.
Results show the superiority of the proposed approach in scenarios where the variational enrichment exceeds the injected noise effect.
arXiv Detail & Related papers (2021-07-27T08:59:39Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Multitarget Tracking with Transformers [21.81266872964314]
Multitarget Tracking (MTT) is the problem of tracking the states of an unknown number of objects using noisy measurements.
In this paper, we propose a high-performing deep-learning method for MTT based on the Transformer architecture.
arXiv Detail & Related papers (2021-04-01T19:14:55Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
- Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastic perturbations, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
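The perturbation idea in the last entry can be illustrated with a toy example (our own sketch, not the paper's code). An argmax is piecewise constant, so its gradient is zero almost everywhere; averaging the argmax over Gumbel-perturbed inputs yields a smooth surrogate, whose expectation for Gumbel noise is exactly the softmax and hence differentiable:

```python
import numpy as np

# Toy illustration of a perturbed optimizer: the map
#   y*(theta) = one_hot(argmax(theta))
# is piecewise constant, so averaging it over Gumbel-perturbed inputs
# produces a smooth surrogate. For Gumbel(0, 1) noise with scale sigma,
# the expectation equals softmax(theta / sigma).

rng = np.random.default_rng(0)

def one_hot_argmax(theta):
    y = np.zeros_like(theta)
    y[np.argmax(theta)] = 1.0
    return y

def perturbed_argmax(theta, sigma=1.0, n_samples=20000):
    # Monte Carlo estimate of E[one_hot_argmax(theta + sigma * G)], G ~ Gumbel(0, 1).
    noise = rng.gumbel(size=(n_samples, theta.size))
    samples = np.array([one_hot_argmax(theta + sigma * g) for g in noise])
    return samples.mean(axis=0)

theta = np.array([1.0, 2.0, 0.5])
smooth = perturbed_argmax(theta)
exact = np.exp(theta) / np.exp(theta).sum()  # softmax: the analytic expectation
print(smooth, exact)  # the Monte Carlo estimate should match softmax to ~1e-2
```

Because the smoothed map has a well-defined analytic form, gradients with respect to `theta` exist and can be estimated from the same perturbed samples, which is what lets discrete solvers sit inside end-to-end trained models.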
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.