SSL-Lanes: Self-Supervised Learning for Motion Forecasting in Autonomous Driving
- URL: http://arxiv.org/abs/2206.14116v1
- Date: Tue, 28 Jun 2022 16:23:25 GMT
- Title: SSL-Lanes: Self-Supervised Learning for Motion Forecasting in Autonomous Driving
- Authors: Prarthana Bhattacharyya, Chengjie Huang and Krzysztof Czarnecki
- Abstract summary: Self-supervised learning (SSL) is an emerging technique to train convolutional neural networks (CNNs) and graph neural networks (GNNs) for more transferable, generalizable, and robust representation learning.
In this study, we report the first systematic exploration of incorporating self-supervision into motion forecasting.
- Score: 9.702784248870522
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-supervised learning (SSL) is an emerging technique that has been
successfully employed to train convolutional neural networks (CNNs) and graph
neural networks (GNNs) for more transferable, generalizable, and robust
representation learning. However, its potential in motion forecasting for
autonomous driving has rarely been explored. In this study, we report the first
systematic exploration and assessment of incorporating self-supervision into
motion forecasting. We first propose to investigate four novel self-supervised
learning tasks for motion forecasting with theoretical rationale and
quantitative and qualitative comparisons on the challenging large-scale
Argoverse dataset. Secondly, we point out that our auxiliary SSL-based learning
setup not only outperforms forecasting methods that use transformers,
complicated fusion mechanisms, and sophisticated online dense goal candidate
optimization algorithms in terms of accuracy, but also has low
inference time and architectural complexity. Lastly, we conduct several
experiments to understand why SSL improves motion forecasting. Code is
open-sourced at https://github.com/AutoVision-cloud/SSL-Lanes.
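To make the auxiliary setup above concrete, here is a minimal PyTorch-style sketch (not the authors' implementation) of a forecasting backbone trained jointly with a self-supervised head via a combined loss L = L_forecast + lambda * L_ssl; the masking pretext task, module sizes, and loss weight are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ForecastWithSSL(nn.Module):
    # Toy forecasting backbone with an auxiliary self-supervised head.
    def __init__(self, feat_dim=64, horizon=30):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(4, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim))
        self.forecast_head = nn.Linear(feat_dim, horizon * 2)  # future (x, y) points
        self.ssl_head = nn.Linear(feat_dim, 4)                 # reconstructs masked input

    def forward(self, x):
        z = self.encoder(x)
        return self.forecast_head(z), self.ssl_head(z)

model = ForecastWithSSL()
x = torch.randn(8, 4)             # toy per-agent input features
future = torch.randn(8, 30 * 2)   # toy ground-truth future trajectories

mask = (torch.rand_like(x) > 0.2).float()   # pretext task: zero out ~20% of features
pred, recon = model(x * mask)

loss_forecast = nn.functional.mse_loss(pred, future)
loss_ssl = nn.functional.mse_loss(recon * (1 - mask), x * (1 - mask))  # masked entries only
loss = loss_forecast + 1.0 * loss_ssl       # lambda = 1.0 is an arbitrary choice
loss.backward()
```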
Related papers
- TPLLM: A Traffic Prediction Framework Based on Pretrained Large Language Models [27.306180426294784]
We introduce TPLLM, a novel traffic prediction framework leveraging Large Language Models (LLMs).
In this framework, we construct a sequence embedding layer based on Convolutional Neural Networks (CNNs) and a graph embedding layer based on Graph Convolutional Networks (GCNs) to extract sequence features and spatial features.
Experiments on two real-world datasets demonstrate commendable performance in both full-sample and few-shot prediction scenarios.
arXiv Detail & Related papers (2024-03-04T17:08:57Z)
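A hedged sketch of the TPLLM embedding pipeline above (not the authors' code): a 1-D CNN embeds each node's traffic time series, one graph-convolution step mixes neighbor information, and the fused per-node tokens would then be passed to a pretrained LLM; the shapes, layer sizes, and toy adjacency matrix are assumptions.

```python
import torch
import torch.nn as nn

N, T, D = 10, 24, 32                 # road-network nodes, time steps, embedding dim
series = torch.randn(N, 1, T)        # one traffic reading per node per time step
adj = torch.eye(N)                   # toy normalized adjacency matrix

cnn = nn.Conv1d(1, D, kernel_size=3, padding=1)   # sequence embedding layer (CNN)
w_gcn = nn.Linear(D, D)                           # graph embedding layer (GCN weight)

seq_emb = cnn(series).mean(dim=2)                 # (N, D) temporal features
graph_emb = torch.relu(w_gcn(adj @ seq_emb))      # one GCN step: relu(A X W)
tokens = seq_emb + graph_emb                      # fused tokens to feed the LLM
print(tokens.shape)                               # torch.Size([10, 32])
```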
- Forecast-MAE: Self-supervised Pre-training for Motion Forecasting with Masked Autoencoders [7.133110402648305]
This study explores the application of self-supervised learning to the task of motion forecasting.
Forecast-MAE is an extension of the masked autoencoders framework, specifically designed for self-supervised learning of the motion forecasting task.
arXiv Detail & Related papers (2023-08-19T02:27:51Z)
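A minimal masked-autoencoding sketch in the spirit of Forecast-MAE above (not the paper's implementation): random trajectory points are hidden and an encoder-decoder is trained to reconstruct them, with the loss computed on the masked points only; the masking ratio and model sizes are assumptions.

```python
import torch
import torch.nn as nn

B, T = 16, 50                          # batch of trajectories, 50 (x, y) points each
traj = torch.randn(B, T, 2)

encoder = nn.GRU(2, 64, batch_first=True)
decoder = nn.Linear(64, T * 2)

mask = (torch.rand(B, T, 1) > 0.5).float()   # hide ~50% of the points
_, h = encoder(traj * mask)                  # encode only the visible points
recon = decoder(h[-1]).view(B, T, 2)

# reconstruction loss on the masked points only, as in MAE-style training
loss = ((recon - traj) ** 2 * (1 - mask)).sum() / (1 - mask).sum().clamp(min=1)
loss.backward()
```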
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
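A hedged sketch of the ConCerNet idea above (not the paper's exact objective): learn a scalar quantity g(x) that stays nearly constant along a single trajectory (positive pairs) while differing across trajectories (negatives); the toy oscillator data, network, and loss weighting are all assumptions.

```python
import torch
import torch.nn as nn

g = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))  # candidate conserved quantity

# toy data: 4 harmonic-oscillator trajectories of different amplitude, 20 states each
t = torch.linspace(0, 6.28, 20)
trajs = torch.stack([torch.stack([a * t.cos(), a * t.sin()], dim=1)
                     for a in (0.5, 1.0, 1.5, 2.0)])   # (4, 20, 2)

vals = g(trajs).squeeze(-1)                 # (4, 20) values of g along each trajectory
within = vals.var(dim=1).mean()             # should shrink: g conserved in time
between = vals.mean(dim=1).var()            # should grow: g separates trajectories
loss = within - 0.1 * between               # contrastive-style trade-off (weight assumed)
loss.backward()
```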
- Understanding and Improving the Role of Projection Head in Self-Supervised Learning [77.59320917894043]
Self-supervised learning (SSL) aims to produce useful feature representations without access to human-labeled data annotations.
Current contrastive learning approaches append a parametrized projection head to the end of some backbone network to optimize the InfoNCE objective.
This raises a fundamental question: Why is a learnable projection head required if we are to discard it after training?
arXiv Detail & Related papers (2022-12-22T05:42:54Z)
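The setup the paper above analyzes can be sketched in a few lines: a backbone f, a parametrized MLP projection head h appended to optimize the InfoNCE objective, and the head discarded when features are transferred; the encoder stand-in, dimensions, and temperature are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Linear(128, 64)                       # stand-in for a real encoder
proj_head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 32))

x1, x2 = torch.randn(8, 128), torch.randn(8, 128)   # two augmented views of a batch
z1 = F.normalize(proj_head(backbone(x1)), dim=1)
z2 = F.normalize(proj_head(backbone(x2)), dim=1)

logits = z1 @ z2.t() / 0.1                          # temperature tau = 0.1 (assumed)
labels = torch.arange(8)                            # i-th row should match i-th column
loss = F.cross_entropy(logits, labels)              # InfoNCE objective
loss.backward()

features = backbone(x1)   # at transfer time, the projection head is dropped
```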
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Effective Self-supervised Pre-training on Low-compute Networks without Distillation [6.530011859253459]
Reported performance of self-supervised learning on low-compute networks has trailed behind standard supervised pre-training by a large margin.
Most prior works attribute this poor performance to the capacity bottleneck of the low-compute networks.
We take a closer look at the detrimental factors causing these practical limitations, and at whether they are intrinsic to the self-supervised low-compute setting.
arXiv Detail & Related papers (2022-10-06T10:38:07Z)
- Hebbian Continual Representation Learning [9.54473759331265]
Continual Learning aims to bring machine learning into a more realistic scenario.
We investigate whether biologically inspired Hebbian learning is useful for tackling continual learning challenges.
arXiv Detail & Related papers (2022-06-28T09:21:03Z)
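For reference, a minimal Hebbian update ("neurons that fire together wire together"): weights grow with the correlation of pre- and post-synaptic activity, with no backpropagated error signal. The learning rate and decay term are assumptions, and this is not the paper's specific method.

```python
import torch

x = torch.rand(100, 20)            # 100 samples of presynaptic activity
w = torch.zeros(20, 5)             # 20 inputs -> 5 output neurons
eta, decay = 0.01, 0.001           # learning rate and weight decay (assumed)

for sample in x:
    y = sample @ w                                    # postsynaptic activity
    w += eta * torch.outer(sample, y) - decay * w     # Hebbian update + decay

print(w.norm())
```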
- Transfer Learning Based Efficient Traffic Prediction with Limited Training Data [3.689539481706835]
Efficient prediction of internet traffic is an essential part of a Self-Organizing Network (SON) for ensuring proactive management.
Deep sequence models for network traffic prediction with limited training data have not been studied extensively in prior work.
We investigated and evaluated the performance of the deep transfer learning technique in traffic prediction with inadequate historical data.
arXiv Detail & Related papers (2022-05-09T14:44:39Z)
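A hedged sketch of the transfer-learning recipe above (not the paper's code): pretrain a small regression model on a data-rich source network, then freeze the feature layers and fine-tune only the output layer on the data-poor target; the model, window size, and toy data are assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(24, 64), nn.ReLU(), nn.Linear(64, 1))

# 1) pretrain on abundant source-domain traffic windows
src_x, src_y = torch.randn(1000, 24), torch.randn(1000, 1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    nn.functional.mse_loss(model(src_x), src_y).backward()
    opt.step()

# 2) freeze the feature layer, fine-tune only the head on scarce target data
for p in model[0].parameters():
    p.requires_grad = False
tgt_x, tgt_y = torch.randn(30, 24), torch.randn(30, 1)   # only 30 samples
opt = torch.optim.Adam(model[2].parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    nn.functional.mse_loss(model(tgt_x), tgt_y).backward()
    opt.step()
```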
- When Does Self-Supervision Help Graph Convolutional Networks? [118.37805042816784]
Self-supervision as an emerging technique has been employed to train convolutional neural networks (CNNs) for more transferable, generalizable, and robust representation learning of images.
In this study, we report the first systematic exploration of incorporating self-supervision into graph convolutional networks (GCNs).
Our results show that, with properly designed task forms and incorporation mechanisms, self-supervision benefits GCNs in gaining more generalizability and robustness.
arXiv Detail & Related papers (2020-06-16T13:29:48Z)
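To illustrate the "properly designed task forms and incorporation mechanisms" from the entry above, here is a minimal multi-task sketch (not the paper's code): one GCN propagation step trained on node classification plus an auxiliary self-supervised loss that reconstructs masked node features; the toy graph, sizes, and loss weighting are assumptions.

```python
import torch
import torch.nn as nn

N, F_in, C = 6, 8, 3                                  # nodes, feature dim, classes
adj = torch.eye(N); adj[0, 1] = adj[1, 0] = 1.0       # toy adjacency (unnormalized)
feats = torch.randn(N, F_in)
labels = torch.randint(0, C, (N,))

w = nn.Linear(F_in, 16)                               # shared GCN weight
cls_head, ssl_head = nn.Linear(16, C), nn.Linear(16, F_in)

mask = (torch.rand(N, F_in) > 0.15).float()           # hide ~15% of node features
h = torch.relu(w(adj @ (feats * mask)))               # one GCN propagation step
loss = (nn.functional.cross_entropy(cls_head(h), labels)
        + 0.5 * nn.functional.mse_loss(ssl_head(h), feats))   # joint objective
loss.backward()
```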
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
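Based on the title above, a rectified-linear postsynaptic potential can be sketched as a kernel max(0, (t - t_j)/tau) that ramps linearly after each presynaptic spike at time t_j; this is a hedged illustration only, with all parameters assumed, and it does not show the paper's backpropagation scheme.

```python
import torch

spike_times = torch.tensor([1.0, 2.5, 4.0])   # presynaptic spike times (ms)
weights = torch.tensor([0.6, 0.3, 0.8])       # synaptic weights
tau, threshold = 5.0, 1.0                     # time constant and firing threshold

def membrane_potential(t):
    # ReL-PSP kernel: linear ramp max(0, (t - t_j) / tau) after each spike
    psp = torch.clamp((t - spike_times) / tau, min=0.0)
    return float((weights * psp).sum())

for step in range(9):
    v = membrane_potential(float(step))
    print(f"t={step:.1f} ms  V={v:.3f}  fired={v > threshold}")
```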
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC, and discuss the open problems that remain.
To address these open problems, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.