Informed Priors for Knowledge Integration in Trajectory Prediction
- URL: http://arxiv.org/abs/2211.00348v1
- Date: Tue, 1 Nov 2022 09:37:14 GMT
- Title: Informed Priors for Knowledge Integration in Trajectory Prediction
- Authors: Christian Schlauch and Nadja Klein and Christian Wirth
- Abstract summary: We propose an informed machine learning method based on continual learning.
It allows the integration of arbitrary prior knowledge, potentially from multiple sources, and does not require specific architectures.
We exemplify our approach by applying it to a state-of-the-art trajectory predictor for autonomous driving.
- Score: 0.225596179391365
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Informed machine learning methods allow the integration of prior
knowledge into learning systems. This can increase accuracy and robustness or
reduce data needs. However, existing methods often assume hard constraining
knowledge that does not need to be traded off against observations but can be
used to directly reduce the problem space. Other approaches represent prior
knowledge through specific architectural changes, which limits their
applicability. We propose an informed machine learning method based on
continual learning. It allows the integration of arbitrary prior knowledge,
potentially from multiple sources, and does not require specific architectures.
Furthermore, our approach enables probabilistic and multi-modal predictions,
which can improve predictive accuracy and robustness. We exemplify our approach
by applying it to a state-of-the-art trajectory predictor for autonomous
driving. This domain particularly depends on informed learning approaches, as
it is subject to an overwhelmingly large variety of possible environments and
very rare events while requiring robust and accurate predictions. We evaluate
our model on a commonly used benchmark dataset, using only data already
available in a conventional setup. We show that our method outperforms both
non-informed and informed learning methods commonly used in the literature.
Furthermore, we remain competitive with a conventional baseline even when
using only half as many observation examples.
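The following is a minimal sketch of the two-stage idea described in the abstract: a prior-knowledge task is learned first, and an approximate Gaussian posterior over the model weights from that stage is reused as an informed prior when training on observed data. The diagonal Fisher approximation (an EWC-style regularizer), the toy model, and the synthetic data loaders are illustrative assumptions; the paper's actual predictor, knowledge tasks, and training setup may differ.

```python
# Sketch (assumptions): regularization-based continual learning where the
# posterior of a knowledge task becomes the prior of the observation task.
import torch
import torch.nn as nn
import torch.nn.functional as F


def fit(model, loader, prior=None, prior_scale=1.0, lr=1e-3, epochs=1):
    """Train the model; with a prior, add a quadratic penalty that keeps the
    weights close to the prior mean, weighted by the prior precision."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            loss = F.cross_entropy(model(x), y)
            if prior is not None:
                for name, p in model.named_parameters():
                    mean, precision = prior[name]
                    loss = loss + 0.5 * prior_scale * (precision * (p - mean) ** 2).sum()
            opt.zero_grad()
            loss.backward()
            opt.step()


def gaussian_posterior(model, loader):
    """Diagonal Gaussian approximation of the posterior after a task:
    mean = current weights, precision ~ empirical Fisher information."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    n_batches = 0
    for x, y in loader:
        model.zero_grad()
        F.cross_entropy(model(x), y).backward()
        for n, p in model.named_parameters():
            fisher[n] += p.grad.detach() ** 2
        n_batches += 1
    return {n: (p.detach().clone(), fisher[n] / max(n_batches, 1))
            for n, p in model.named_parameters()}


# Toy stand-ins for a trajectory predictor and its data (assumptions).
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))
knowledge_loader = [(torch.randn(32, 16), torch.randint(0, 8, (32,))) for _ in range(10)]
observation_loader = [(torch.randn(32, 16), torch.randint(0, 8, (32,))) for _ in range(10)]

fit(model, knowledge_loader)                          # stage 1: prior-knowledge task
informed_prior = gaussian_posterior(model, knowledge_loader)
fit(model, observation_loader, prior=informed_prior)  # stage 2: observation task
```

Further knowledge sources could, in principle, be chained the same way, with each stage's approximate posterior serving as the prior of the next.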
Related papers
- Machine Learning for predicting chaotic systems [0.0]
We show that well-tuned simple methods, as well as untuned baseline methods, often outperform state-of-the-art deep learning models.
These findings underscore the importance of matching prediction methods to data characteristics and available computational resources.
arXiv Detail & Related papers (2024-07-29T16:34:47Z)
- Stop overkilling simple tasks with black-box models and use transparent models instead [57.42190785269343]
Deep learning approaches are able to extract features autonomously from raw data.
This allows for bypassing the feature engineering process.
Deep learning strategies often outperform traditional models in terms of accuracy.
arXiv Detail & Related papers (2023-02-06T14:28:49Z)
- Uncertainty Estimation based on Geometric Separation [13.588210692213568]
In machine learning, accurately estimating the probability that a model's output for a specific input is correct is crucial for risk management.
We put forward a novel geometric-based approach for improving uncertainty estimations in machine learning models.
arXiv Detail & Related papers (2023-01-11T13:19:24Z)
- Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z)
- Knowledge Augmented Machine Learning with Applications in Autonomous Driving: A Survey [37.84106999449108]
This work provides an overview of existing techniques and methods that combine data-driven models with existing knowledge.
The identified approaches are structured according to the categories knowledge integration, extraction and conformity.
In particular, we address the application of the presented methods in the field of autonomous driving.
arXiv Detail & Related papers (2022-05-10T07:25:32Z)
- Masked prediction tasks: a parameter identifiability view [49.533046139235466]
We focus on the widely used self-supervised learning method of predicting masked tokens.
We show that there is a rich landscape of possibilities, out of which some prediction tasks yield identifiability, while others do not.
arXiv Detail & Related papers (2022-02-18T17:09:32Z)
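As a generic illustration of the masked-token prediction objective referenced in the entry above (the paper's identifiability analysis is not reproduced here), the sketch below hides a random fraction of tokens and trains a small encoder to recover them. Vocabulary size, mask rate, and architecture are arbitrary assumptions.

```python
# Sketch (assumptions): generic masked-token prediction on toy data.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, seq_len, mask_id, mask_rate = 100, 20, 0, 0.15
embed = nn.Embedding(vocab_size, 32)
layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(32, vocab_size)
params = list(embed.parameters()) + list(encoder.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

tokens = torch.randint(1, vocab_size, (64, seq_len))  # toy corpus, id 0 reserved for [MASK]
for _ in range(5):
    mask = torch.rand(tokens.shape) < mask_rate       # positions to hide
    corrupted = tokens.masked_fill(mask, mask_id)     # replace them with the mask token
    logits = head(encoder(embed(corrupted)))
    loss = F.cross_entropy(logits[mask], tokens[mask])  # recover originals at masked positions
    opt.zero_grad()
    loss.backward()
    opt.step()
```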
- Using Time-Series Privileged Information for Provably Efficient Learning of Prediction Models [6.7015527471908625]
We study prediction of future outcomes with supervised models that use privileged information during learning.
The privileged information comprises samples of the time series observed between the baseline time of prediction and the future outcome.
We show that our approach is generally preferable to classical learning, particularly when data is scarce.
arXiv Detail & Related papers (2021-10-28T10:07:29Z)
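As a toy illustration of the setting in the entry above: the intermediate time-series samples are available only during training, so one simple strategy is to fit a chain of regressions through them and compose the chain at prediction time, when only the baseline features are observed. The linear-Gaussian data and least-squares chaining below are assumptions for illustration, not necessarily the paper's estimator or its guarantees.

```python
# Sketch (assumptions): chained regression through privileged intermediate
# time steps, composed so that prediction needs only the baseline features.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))                                        # baseline features
Z1 = X @ rng.normal(size=(d, d)) + 0.1 * rng.normal(size=(n, d))   # privileged step 1
Z2 = Z1 @ rng.normal(size=(d, d)) + 0.1 * rng.normal(size=(n, d))  # privileged step 2
y = Z2 @ rng.normal(size=d) + 0.1 * rng.normal(size=n)             # future outcome

w_direct = lstsq(X, y, rcond=None)[0]   # classical learning: y regressed on X only

A1 = lstsq(X, Z1, rcond=None)[0]        # X  -> Z1
A2 = lstsq(Z1, Z2, rcond=None)[0]       # Z1 -> Z2
w3 = lstsq(Z2, y, rcond=None)[0]        # Z2 -> y
w_chained = A1 @ A2 @ w3                # composed predictor, usable from X alone

X_test = rng.normal(size=(1000, d))
pred_direct = X_test @ w_direct         # both need only X at prediction time
pred_chained = X_test @ w_chained
```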
- Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis [96.53859361560505]
Aspect-based Sentiment Analysis (ABSA) aims to determine the sentiment polarity towards an aspect.
There always exists severe domain shift between the pretraining and downstream ABSA datasets.
We introduce a unified alignment pretraining framework into the vanilla pretrain-finetune pipeline.
arXiv Detail & Related papers (2021-10-26T04:03:45Z)
- Injecting Knowledge in Data-driven Vehicle Trajectory Predictors [82.91398970736391]
Vehicle trajectory prediction tasks have been commonly tackled from two perspectives: knowledge-driven or data-driven.
In this paper, we propose to learn a "Realistic Residual Block" (RRB) which effectively connects these two perspectives.
Our proposed method outputs realistic predictions by confining the residual range and taking into account its uncertainty.
arXiv Detail & Related papers (2021-03-08T16:03:09Z)
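A hedged sketch of the residual idea summarized in the entry above: a knowledge-driven base prediction (a constant-velocity rollout, chosen here purely as an assumption) is refined by a learned residual that is squashed into a bounded range. The network sizes, the tanh bound, and the omission of an uncertainty output are simplifications, not the paper's architecture.

```python
# Sketch (assumptions): bounded learned residual on top of a physics-based prediction.
import torch
import torch.nn as nn


class BoundedResidualPredictor(nn.Module):
    def __init__(self, history_len=8, horizon=12, max_residual=2.0):
        super().__init__()
        self.horizon = horizon
        self.max_residual = max_residual  # caps how far learning may deviate (metres)
        self.net = nn.Sequential(
            nn.Linear(history_len * 2, 64), nn.ReLU(),
            nn.Linear(64, horizon * 2),
        )

    def knowledge_prediction(self, history):
        # Constant-velocity rollout from the last observed state (knowledge-driven part).
        last, prev = history[:, -1], history[:, -2]
        velocity = last - prev
        steps = torch.arange(1, self.horizon + 1, dtype=history.dtype)
        return last[:, None, :] + steps[None, :, None] * velocity[:, None, :]

    def forward(self, history):
        base = self.knowledge_prediction(history)                  # (B, horizon, 2)
        raw = self.net(history.flatten(1)).view(-1, self.horizon, 2)
        residual = self.max_residual * torch.tanh(raw)             # bounded data-driven correction
        return base + residual


model = BoundedResidualPredictor()
history = torch.cumsum(torch.randn(4, 8, 2), dim=1)  # toy past positions, shape (B, T, 2)
future = model(history)                              # (4, 12, 2) predicted positions
```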
- Model-Based Deep Learning [155.063817656602]
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques.
Deep neural networks (DNNs) use generic architectures which learn to operate from data, and demonstrate excellent performance.
We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches.
arXiv Detail & Related papers (2020-12-15T16:29:49Z)
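One well-known instance of the hybrid model-based/data-driven paradigm surveyed in the entry above is algorithm unrolling: a classical iterative solver is unrolled for a fixed number of iterations and some of its constants are made learnable. The sketch below unrolls ISTA for sparse recovery with learned step sizes and thresholds; it is a generic example chosen for illustration, not a method taken from the survey.

```python
# Sketch (assumptions): ISTA for sparse recovery, unrolled with learned
# per-iteration step sizes and soft thresholds.
import torch
import torch.nn as nn


class UnrolledISTA(nn.Module):
    def __init__(self, A, n_iterations=10):
        super().__init__()
        self.register_buffer("A", A)  # known measurement model (model-based part)
        self.step = nn.Parameter(torch.full((n_iterations,), 0.1))    # learned (data-driven part)
        self.thresh = nn.Parameter(torch.full((n_iterations,), 0.05))

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for step, thresh in zip(self.step, self.thresh):
            grad = (x @ self.A.T - y) @ self.A          # gradient of 0.5 * ||A x - y||^2
            z = x - step * grad                         # model-based gradient step
            x = torch.sign(z) * torch.clamp(z.abs() - thresh, min=0.0)  # soft threshold
        return x


A = torch.randn(20, 50) / 20 ** 0.5              # measurement matrix, assumed known
x_true = torch.zeros(8, 50)
x_true[:, :5] = torch.randn(8, 5)                # sparse ground-truth signals
y = x_true @ A.T
model = UnrolledISTA(A)
loss = nn.functional.mse_loss(model(y), x_true)  # trained end-to-end against ground truth
loss.backward()
```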
- A Review of Meta-level Learning in the Context of Multi-component, Multi-level Evolving Prediction Systems [6.810856082577402]
The exponential growth of volume, variety and velocity of data is raising the need for investigations of automated or semi-automated ways to extract useful patterns from the data.
It requires deep expert knowledge and extensive computational resources to find the most appropriate mapping of learning methods for a given problem.
There is a need for an intelligent recommendation engine that can advise which learning algorithm is best suited to a given dataset.
arXiv Detail & Related papers (2020-07-17T14:14:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.