On Exposing the Challenging Long Tail in Future Prediction of Traffic
Actors
- URL: http://arxiv.org/abs/2103.12474v2
- Date: Wed, 24 Mar 2021 10:29:42 GMT
- Title: On Exposing the Challenging Long Tail in Future Prediction of Traffic
Actors
- Authors: Osama Makansi, Özgün Çiçek, Yassine Marrakchi, and Thomas Brox
- Abstract summary: Most critical scenarios are less frequent and more complex than the uncritical ones.
In this paper, we address specifically the challenging scenarios at the long tail of the dataset distribution.
We show on four public datasets that this leads to improved performance on the challenging scenarios while the overall performance stays stable.
- Score: 38.782472905505124
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Predicting the states of dynamic traffic actors into the future is important
for autonomous systems to operate safely and efficiently. Remarkably, the most
critical scenarios are much less frequent and more complex than the
uncritical ones. Therefore, uncritical cases dominate the prediction. In this
paper, we address specifically the challenging scenarios at the long tail of
the dataset distribution. Our analysis shows that the common losses tend to
place challenging cases suboptimally in the embedding space. As a consequence,
we propose to supplement the usual loss with a loss that places challenging
cases closer to each other. This triggers sharing information among challenging
cases and learning specific predictive features. We show on four public datasets
that this leads to improved performance on the challenging scenarios while the
overall performance stays stable. The approach is agnostic w.r.t. the used
network architecture, input modality or viewpoint, and can be integrated into
existing solutions easily.
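The abstract does not spell out the exact form of the supplementary loss. As a rough illustration only, a pull-together term over the embeddings of challenging cases could look like the sketch below; the function name, the 0.1 weighting factor, and the mean-pairwise-distance form are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def challenge_pull_loss(embeddings, is_challenging):
    """Illustrative auxiliary term: mean pairwise squared distance among
    embeddings flagged as challenging, pulling those cases together."""
    z = embeddings[is_challenging]
    if len(z) < 2:
        return 0.0
    diffs = z[:, None, :] - z[None, :, :]   # (k, k, d) pairwise differences
    sq = (diffs ** 2).sum(-1)               # (k, k) squared distances
    k = len(z)
    return sq.sum() / (k * (k - 1))         # mean over ordered pairs

# Toy example: four 2-D embeddings, the first two marked as challenging.
emb = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 3.0]])
mask = np.array([True, True, False, False])
main_loss = 0.5                             # stand-in for the usual prediction loss
total = main_loss + 0.1 * challenge_pull_loss(emb, mask)
```

Minimizing such a term encourages challenging cases to share a region of the embedding space, which is the mechanism the abstract describes for sharing information among them.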
Related papers
- PP-TIL: Personalized Planning for Autonomous Driving with Instance-based Transfer Imitation Learning [4.533437433261497]
We propose an instance-based transfer imitation learning approach for personalized motion planning.
We extract the style feature distribution from user demonstrations, constructing the regularization term for the approximation of user style.
Compared to the baseline methods, our approach mitigates the overfitting issue caused by sparse user data.
arXiv Detail & Related papers (2024-07-26T07:51:11Z)
- LargeST: A Benchmark Dataset for Large-Scale Traffic Forecasting [65.71129509623587]
Road traffic forecasting plays a critical role in smart city initiatives and has experienced significant advancements thanks to the power of deep learning.
However, the promising results achieved on current public datasets may not be applicable to practical scenarios.
We introduce the LargeST benchmark dataset, which includes a total of 8,600 sensors in California with a 5-year time coverage.
arXiv Detail & Related papers (2023-06-14T05:48:36Z)
- ReasonNet: End-to-End Driving with Temporal and Global Reasoning [31.319673950804972]
We present ReasonNet, a novel end-to-end driving framework that extensively exploits both temporal and global information of the driving scene.
Our method can effectively process the interactions and relationships among features in different frames.
Reasoning about the global information of the scene can also improve overall perception performance.
arXiv Detail & Related papers (2023-05-17T18:24:43Z)
- Self-SuperFlow: Self-supervised Scene Flow Prediction in Stereo Sequences [12.650574326251023]
In this paper, we explore the extension of a self-supervised loss for scene flow prediction.
Regarding the KITTI scene flow benchmark, our method outperforms the corresponding supervised pre-training of the same network.
arXiv Detail & Related papers (2022-06-30T13:55:17Z)
- Towards Federated Long-Tailed Learning [76.50892783088702]
Data privacy and class imbalance are the norm rather than the exception in many machine learning tasks.
Recent attempts have been launched to, on one side, address the problem of learning from pervasive private data, and on the other side, learn from long-tailed data.
This paper focuses on learning with long-tailed (LT) data distributions under the context of the popular privacy-preserved federated learning (FL) framework.
arXiv Detail & Related papers (2022-06-30T02:34:22Z)
- Learning Self-Modulating Attention in Continuous Time Space with Applications to Sequential Recommendation [102.24108167002252]
We propose a novel attention network, named self-modulating attention, that models the complex and non-linearly evolving dynamic user preferences.
We empirically demonstrate the effectiveness of our method on top-N sequential recommendation tasks, and the results on three large-scale real-world datasets show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2022-03-30T03:54:11Z)
- Instance exploitation for learning temporary concepts from sparsely labeled drifting data streams [15.49323098362628]
Continual learning from streaming data sources becomes more and more popular.
Dealing with dynamic and everlasting problems poses new challenges.
One of the most crucial limitations is that we cannot assume having access to a finite and complete data set.
arXiv Detail & Related papers (2020-09-20T08:11:43Z)
- Event Prediction in the Big Data Era: A Systematic Survey [7.3810864598379755]
Event prediction is becoming a viable option in the big data era.
This paper aims to provide a systematic and comprehensive survey of the technologies, applications, and evaluations of event prediction.
arXiv Detail & Related papers (2020-07-19T23:24:52Z)
- Ambiguity in Sequential Data: Predicting Uncertain Futures with Recurrent Models [110.82452096672182]
We propose an extension of the Multiple Hypothesis Prediction (MHP) model to handle ambiguous predictions with sequential data.
We also introduce a novel metric for ambiguous problems, which is better suited to account for uncertainties.
arXiv Detail & Related papers (2020-03-10T09:15:42Z)
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies.
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
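For context on the last entry: THP uses self-attention in place of the hand-crafted kernel of a classical Hawkes process, whose intensity is λ(t) = μ + Σ_{t_i < t} α·exp(−β(t − t_i)). A minimal sketch of that classical baseline follows; the parameter values are purely illustrative and are not taken from the paper.

```python
import math

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Classical exponential-kernel Hawkes intensity:
    lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_i)).
    Each past event transiently raises the rate of future events."""
    return mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events if ti < t)

events = [1.0, 2.5]              # illustrative past event times
rate = hawkes_intensity(3.0, events)
```

The excitation from each event decays exponentially, so the intensity falls back toward the base rate μ long after the last event; THP's self-attention replaces this fixed decay with learned long-term dependencies.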
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.