Online Learning of a Probabilistic and Adaptive Scene Representation
- URL: http://arxiv.org/abs/2103.16832v1
- Date: Wed, 31 Mar 2021 06:22:05 GMT
- Title: Online Learning of a Probabilistic and Adaptive Scene Representation
- Authors: Zike Yan, Xin Wang, Hongbin Zha
- Abstract summary: We build a consistent scene model on-the-fly for online spatial perception, interpretation, and action.
We experimentally show that the proposed representation achieves state-of-the-art accuracy with promising efficiency.
- Score: 31.02016059126335
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Constructing and maintaining a consistent scene model on-the-fly is the core
task for online spatial perception, interpretation, and action. In this paper,
we represent the scene with a Bayesian nonparametric mixture model, seamlessly
describing per-point occupancy status with a continuous probability density
function. Instead of following the conventional data fusion paradigm, we
address the problem of learning online the process by which sequential point
cloud data are generated from the scene geometry. An incremental and parallel
inference is performed to update the parameter space in real-time. We
experimentally show that the proposed representation achieves state-of-the-art
accuracy with promising efficiency. The consistent probabilistic formulation
yields a generative model that adapts to different sensor characteristics,
and the model complexity can be adjusted on-the-fly according to the scale
of the data.
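The abstract gives no algorithm listing, so the following is a minimal sketch of one ingredient it describes: incrementally updating a mixture-of-Gaussians scene model from streaming points, growing new components on demand. The class, thresholds, and update rules are illustrative stand-ins, not the paper's Bayesian nonparametric inference.

```python
import numpy as np

class StreamingGaussianMixture:
    """Incremental Gaussian mixture over streaming 3-D points.

    Sufficient statistics (count, mean, covariance) are updated per point,
    and a new component is spawned when no existing component explains an
    observation well -- a crude stand-in for nonparametric model growth.
    """

    def __init__(self, spawn_threshold=1e-4, init_var=0.01):
        self.counts, self.means, self.covs = [], [], []
        self.spawn_threshold = spawn_threshold
        self.init_var = init_var

    def _density(self, x, mu, cov):
        d = x.size
        diff = x - mu
        norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
        return np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff) / norm

    def _spawn(self, x):
        self.counts.append(1.0)
        self.means.append(x.astype(float).copy())
        self.covs.append(self.init_var * np.eye(x.size))

    def update(self, x):
        if not self.means:
            self._spawn(x)
            return
        likes = np.array([self._density(x, m, c)
                          for m, c in zip(self.means, self.covs)])
        if likes.sum() < self.spawn_threshold:
            self._spawn(x)                        # no component fits: grow
            return
        resp = likes / likes.sum()                # soft assignment
        for k, r in enumerate(resp):
            self.counts[k] += r
            lr = r / self.counts[k]               # per-component step size
            diff = x - self.means[k]
            self.means[k] = self.means[k] + lr * diff
            self.covs[k] = (1 - lr) * self.covs[k] + lr * np.outer(diff, diff)

# Feed a stream of simulated surface points, one at a time.
rng = np.random.default_rng(0)
model = StreamingGaussianMixture()
for x in rng.normal([0, 0, 1], 0.05, size=(500, 3)):
    model.update(x)
print(len(model.means), "components after 500 points")
```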
Related papers
- Amortized Probabilistic Conditioning for Optimization, Simulation and Inference [20.314865219675056]
The Amortized Conditioning Engine (ACE) is a new transformer-based meta-learning model that explicitly represents latent variables of interest.
ACE affords conditioning on both observed data and interpretable latent variables, the inclusion of priors at runtime, and outputs predictive distributions for discrete and continuous data and latents.
arXiv Detail & Related papers (2024-10-20T07:22:54Z)
- Generalizable Implicit Neural Representation As a Universal Spatiotemporal Traffic Data Learner [46.866240648471894]
Spatiotemporal Traffic Data (STTD) measures the complex dynamical behaviors of the multiscale transportation system.
We present a novel paradigm to address the STTD learning problem by parameterizing STTD as an implicit neural representation.
We validate its effectiveness through extensive experiments in real-world scenarios, showcasing applications from corridor to network scales.
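As a toy illustration of the paradigm (not the paper's architecture), a coordinate-based MLP can parameterize spatiotemporal data directly as a function of (location, time); the layer sizes and synthetic signal below are assumptions.

```python
import torch
import torch.nn as nn

# Implicit neural representation: a small MLP mapping a continuous
# (location, time) coordinate directly to a traffic measurement.
inr = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

# Toy data: speed observed at normalized (position, time) pairs.
coords = torch.rand(1024, 2)
speeds = torch.sin(4 * coords[:, :1]) + 0.1 * coords[:, 1:]

opt = torch.optim.Adam(inr.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = ((inr(coords) - speeds) ** 2).mean()
    loss.backward()
    opt.step()

# The fitted function can be queried at any coordinate, including
# locations and times with no direct observation.
print(inr(torch.tensor([[0.5, 0.9]])))
```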
arXiv Detail & Related papers (2024-06-13T02:03:22Z)
- Revisiting Dynamic Evaluation: Online Adaptation for Large Language Models [88.47454470043552]
We consider the problem of fine-tuning the parameters of a language model online at test time, also known as dynamic evaluation.
Online adaptation turns parameters into temporally changing states and provides a form of context-length extension with memory in weights.
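A schematic sketch of the dynamic-evaluation loop under stated assumptions: a toy next-token model stands in for the LLM, and each test chunk is scored before the weights are fine-tuned on it.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy next-token model: embedding -> linear head (stand-in for an LLM).
vocab = 100
model = nn.Sequential(nn.Embedding(vocab, 32), nn.Linear(32, vocab))

def dynamic_evaluation(model, stream, chunk=32, lr=1e-3):
    """Score each chunk of the test stream, then fine-tune on it.

    The weights become a temporally changing state, carrying context
    from earlier chunks forward -- online adaptation at test time.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    losses = []
    for i in range(0, len(stream) - chunk, chunk):
        x, y = stream[i:i + chunk], stream[i + 1:i + chunk + 1]
        loss = F.cross_entropy(model(x), y)
        losses.append(loss.item())        # evaluate before adapting
        opt.zero_grad()
        loss.backward()                   # then update on the same chunk
        opt.step()
    return sum(losses) / len(losses)

stream = torch.randint(0, vocab, (1024,))
print(dynamic_evaluation(model, stream))
```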
arXiv Detail & Related papers (2024-03-03T14:03:48Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
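For orientation, a minimal bootstrap particle filter shows the on-the-fly SMC backbone; Online VSMC additionally adapts the proposal and model parameters with stochastic gradients as data arrive, which this sketch omits. The model and constants are assumptions.

```python
import numpy as np

# Bootstrap particle filter for a 1-D linear-Gaussian state space model.
rng = np.random.default_rng(1)
a, q, r, n = 0.9, 0.5, 0.3, 1000     # transition, noise scales, particle count

# Simulate a stream of observations.
x_true, ys = 0.0, []
for _ in range(200):
    x_true = a * x_true + rng.normal(0, q)
    ys.append(x_true + rng.normal(0, r))

particles = rng.normal(0, 1, n)
for y in ys:                          # one pass, no revisiting of data
    particles = a * particles + rng.normal(0, q, n)   # propose from the prior
    logw = -0.5 * ((y - particles) / r) ** 2          # Gaussian likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    idx = rng.choice(n, size=n, p=w)                  # multinomial resampling
    particles = particles[idx]

print("filtered mean:", particles.mean(), "true state:", x_true)
```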
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Kalman Filter for Online Classification of Non-Stationary Data [101.26838049872651]
In Online Continual Learning (OCL) a learning system receives a stream of data and sequentially performs prediction and training steps.
We introduce a probabilistic Bayesian online learning model by using a neural representation and a state space model over the linear predictor weights.
In experiments in multi-class classification we demonstrate the predictive ability of the model and its flexibility to capture non-stationarity.
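A minimal sketch of the underlying recipe, treating the predictor weights as a latent random-walk state tracked by a Kalman filter; the paper works with learned neural features and multi-class outputs, while this toy version uses scalar regression with assumed dimensions and noise levels.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_hat = np.zeros(d)                 # posterior mean of the weights
P = np.eye(d)                       # posterior covariance
Q = 1e-3 * np.eye(d)                # random-walk drift of the true weights
r2 = 0.1                            # observation noise variance

w_true = rng.normal(size=d)
for t in range(2000):
    w_true += rng.multivariate_normal(np.zeros(d), Q)  # non-stationary target
    x = rng.normal(size=d)
    y = x @ w_true + rng.normal(0, np.sqrt(r2))

    P = P + Q                                 # predict: weights may have drifted
    s = x @ P @ x + r2                        # innovation variance
    k = P @ x / s                             # Kalman gain
    w_hat = w_hat + k * (y - x @ w_hat)       # correct on the prediction error
    P = P - np.outer(k, x @ P)

print("weight error:", np.linalg.norm(w_hat - w_true))
```

Because the filter carries a full covariance over the weights, it both tracks drift and quantifies how uncertain each prediction is.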
arXiv Detail & Related papers (2023-06-14T11:41:42Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Probabilistic Point Cloud Modeling via Self-Organizing Gaussian Mixture Models [19.10047652180224]
We present a continuous probabilistic modeling methodology for spatial point cloud data using finite Gaussian Mixture Models (GMMs).
We use a self-organizing principle from information-theoretic learning to automatically adapt the complexity of the GMM based on the relevant information in the sensor data.
The approach is evaluated against existing point cloud modeling techniques on real-world data with varying degrees of scene complexity.
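A rough stand-in for the idea (not the paper's self-organizing principle): fit GMMs of increasing size to a point cloud and select the complexity by BIC; the synthetic cloud and the sweep below are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Three blobs of 3-D points standing in for sensor data.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(c, 0.1, size=(400, 3))
                   for c in ([0, 0, 0], [1, 0, 0], [0, 1, 0])])

# Pick the mixture size with the best BIC -- a crude substitute for
# adapting complexity online from the information in the data.
best = min(
    (GaussianMixture(n_components=k, random_state=0).fit(cloud)
     for k in range(1, 10)),
    key=lambda g: g.bic(cloud),
)
print("selected components:", best.n_components)
print("log-density at a query point:", best.score_samples([[0.5, 0.5, 0.0]]))
```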
arXiv Detail & Related papers (2023-01-31T19:28:00Z)
- Conditional Permutation Invariant Flows [23.740061786510417]
We present a conditional generative probabilistic model of set-valued data with a tractable log density.
These dynamics are driven by a learnable per-set-element term and pairwise interactions, both parametrized by deep neural networks.
We illustrate the utility of this model via applications including (1) complex traffic scene generation conditioned on visually specified map information, and (2) object bounding box generation conditioned directly on images.
arXiv Detail & Related papers (2022-06-17T21:43:38Z)
- Dream to Explore: Adaptive Simulations for Autonomous Systems [3.0664963196464448]
We tackle the problem of learning to control dynamical systems by applying Bayesian nonparametric methods.
By employing Gaussian processes to discover latent world dynamics, we mitigate common data efficiency issues observed in reinforcement learning.
Our algorithm jointly learns a world model and policy by optimizing a variational lower bound of a log-likelihood.
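A minimal sketch of the world-model ingredient, assuming a synthetic one-step dynamics dataset; the paper learns the model and policy jointly through a variational lower bound, which is omitted here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Learn one-step dynamics s' = f(s, a) with a Gaussian process: the
# data-efficient latent-dynamics ingredient of the approach.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(50, 1))
actions = rng.uniform(-1, 1, size=(50, 1))
next_states = np.sin(states) + 0.5 * actions + rng.normal(0, 0.01, states.shape)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-4)
gp.fit(np.hstack([states, actions]), next_states.ravel())

# Predictive mean and uncertainty let a planner "dream" rollouts and
# know where the learned model is unreliable.
mean, std = gp.predict([[0.2, -0.3]], return_std=True)
print(mean, std)
```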
arXiv Detail & Related papers (2021-10-27T04:27:28Z)
- GELATO: Geometrically Enriched Latent Model for Offline Reinforcement Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out of distribution samples as well as the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z)
- Tracking Performance of Online Stochastic Learners [57.14673504239551]
Online algorithms are popular in large-scale learning settings due to their ability to compute updates on the fly, without the need to store and process data in large batches.
When a constant step-size is used, these algorithms also have the ability to adapt to drifts in problem parameters, such as data or model properties, and track the optimal solution with reasonable accuracy.
We establish a link between steady-state performance derived under stationarity assumptions and the tracking performance of online learners under random walk models.
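A small simulation of the setting, with assumed dimensions and noise levels: constant step-size LMS tracking a random-walk optimum, trading a steady-state error floor for the ability to follow drift.

```python
import numpy as np

rng = np.random.default_rng(0)
d, mu, sigma_walk = 4, 0.05, 0.01
w_opt = rng.normal(size=d)          # drifting optimum
w = np.zeros(d)                     # online learner's estimate
errs = []
for t in range(5000):
    w_opt += rng.normal(0, sigma_walk, d)   # random-walk drift in the target
    x = rng.normal(size=d)
    y = x @ w_opt + rng.normal(0, 0.1)
    w += mu * (y - x @ w) * x               # LMS update, constant step size
    errs.append(np.sum((w - w_opt) ** 2))

# A decaying step size would converge on a fixed target but lose the
# drifting one; the constant step keeps tracking, at a nonzero error floor.
print("steady-state tracking MSD:", np.mean(errs[-1000:]))
```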
arXiv Detail & Related papers (2020-04-04T14:16:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.