OpenNet: Incremental Learning for Autonomous Driving Object Detection
with Balanced Loss
- URL: http://arxiv.org/abs/2311.14939v1
- Date: Sat, 25 Nov 2023 06:02:50 GMT
- Title: OpenNet: Incremental Learning for Autonomous Driving Object Detection
with Balanced Loss
- Authors: Zezhou Wang, Guitao Cao, Xidong Xi, Jiangtao Wang
- Abstract summary: Experimental results on the CODA dataset show that the proposed method outperforms existing methods.
- Score: 3.761247766448379
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated driving object detection has always been a challenging task in
computer vision due to environmental uncertainties. These uncertainties include
significant differences in object sizes and the presence of unseen classes.
Traditional object detection models may perform poorly when applied directly to
automated driving, because they usually presume fixed categories of common
traffic participants, such as pedestrians and cars. Worse, the huge class
imbalance between common and novel classes further exacerbates the performance
degradation. To address these issues, we propose OpenNet, which moderates the
class imbalance with a Balanced Loss based on Cross Entropy Loss. Besides, we
adopt an inductive layer based on gradient reshaping to quickly learn new
classes from limited samples during incremental learning. To counter
catastrophic forgetting, we employ normalized feature distillation. In
addition, we improve multi-scale detection robustness and unknown-class
recognition through an FPN and energy-based detection, respectively.
Experimental results on the CODA dataset show that the proposed method
outperforms existing methods.
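The abstract does not give the exact form of the Balanced Loss or the energy-based detector, so the minimal PyTorch sketch below only illustrates the two underlying ideas under common formulations: a cross-entropy loss reweighted by inverse class frequency, and an energy score (negative log-sum-exp of the logits) thresholded to flag unknown classes. The function names, weighting scheme, and threshold are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: the exact Balanced Loss and energy-based detector
# used by OpenNet are not specified in this abstract.
import torch
import torch.nn.functional as F

def balanced_cross_entropy(logits, targets, class_counts):
    """Cross-entropy reweighted by inverse class frequency (assumed scheme)."""
    weights = 1.0 / class_counts.float().clamp(min=1.0)       # rare classes weigh more
    weights = weights * (len(class_counts) / weights.sum())   # keep average weight ~1
    return F.cross_entropy(logits, targets, weight=weights)

def energy_score(logits, temperature=1.0):
    """Energy score (Liu et al., 2020): higher energy suggests an unknown class."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

def flag_unknown(logits, threshold=0.0):
    """Mark detections whose energy exceeds a chosen threshold as unknown-class."""
    return energy_score(logits) > threshold

# Toy usage with random logits for a 5-way known-class head.
logits = torch.randn(8, 5)
targets = torch.randint(0, 5, (8,))
class_counts = torch.tensor([500, 300, 100, 20, 5])  # imbalanced training set
loss = balanced_cross_entropy(logits, targets, class_counts)
unknown_mask = flag_unknown(logits, threshold=-1.0)
```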
Related papers
- Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models [60.87795376541144]
A world model is a neural network capable of predicting an agent's next state given past states and actions.
During end-to-end training, our policy learns how to recover from errors by aligning with states observed in human demonstrations.
We present qualitative and quantitative results, demonstrating significant improvements upon prior state of the art in closed-loop testing.
arXiv Detail & Related papers (2024-09-25T06:48:25Z) - A Reusable AI-Enabled Defect Detection System for Railway Using
Ensembled CNN [5.381374943525773]
Defect detection is crucial for ensuring the trustworthiness of railway systems.
Current approaches rely on single deep-learning models, like CNNs.
We propose a reusable AI-enabled defect detection approach.
arXiv Detail & Related papers (2023-11-24T19:45:55Z) - Activate and Reject: Towards Safe Domain Generalization under Category
Shift [71.95548187205736]
We study a practical problem of Domain Generalization under Category Shift (DGCS)
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - A Novel Driver Distraction Behavior Detection Method Based on
Self-supervised Learning with Masked Image Modeling [5.1680226874942985]
Driver distraction causes a significant number of traffic accidents every year, resulting in economic losses and casualties.
Driver distraction detection primarily relies on traditional convolutional neural networks (CNN) and supervised learning methods.
This paper proposes a new self-supervised learning method based on masked image modeling for driver distraction behavior detection.
arXiv Detail & Related papers (2023-06-01T10:53:32Z) - EvCenterNet: Uncertainty Estimation for Object Detection using
Evidential Learning [26.535329379980094]
EvCenterNet is a novel uncertainty-aware 2D object detection framework.
We employ evidential learning to estimate both classification and regression uncertainties.
We train our model on the KITTI dataset and evaluate it on challenging out-of-distribution datasets.
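The summary above only names evidential learning; as a rough illustration of the classification side of that idea (following the common Dirichlet formulation of Sensoy et al., 2018, not necessarily EvCenterNet's actual head), non-negative evidence can parameterize a Dirichlet distribution from which both class probabilities and an uncertainty value are derived:

```python
# Sketch of Dirichlet-based evidential classification; illustrative only,
# not EvCenterNet's actual detection head.
import torch
import torch.nn.functional as F

def evidential_outputs(logits):
    evidence = F.relu(logits)                   # non-negative evidence per class
    alpha = evidence + 1.0                      # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)  # total evidence S
    probs = alpha / strength                    # expected class probabilities
    uncertainty = logits.shape[-1] / strength   # K / S, in (0, 1]
    return probs, uncertainty.squeeze(-1)

probs, u = evidential_outputs(torch.randn(4, 10))  # e.g. 4 detections, 10 classes
```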
arXiv Detail & Related papers (2023-03-06T11:07:11Z) - Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they fail.
arXiv Detail & Related papers (2022-03-28T20:29:50Z) - Improving Variational Autoencoder based Out-of-Distribution Detection
for Embedded Real-time Applications [2.9327503320877457]
Out-of-distribution (OoD) detection is an emerging approach for identifying, in real time, inputs that fall outside the training distribution.
In this paper, we show how we can robustly detect hazardous motion around autonomous driving agents.
Our methods significantly improve detection of OoD factors in unique driving scenarios, performing 42% better than state-of-the-art approaches.
Our model also generalizes near-perfectly across the real-world and simulated driving datasets tested, 97% better than the state of the art.
arXiv Detail & Related papers (2021-07-25T07:52:53Z) - Anomaly Detection in Cybersecurity: Unsupervised, Graph-Based and
Supervised Learning Methods in Adversarial Environments [63.942632088208505]
Inherent to today's operating environment is the practice of adversarial machine learning.
In this work, we examine the feasibility of unsupervised learning and graph-based methods for anomaly detection.
We incorporate a realistic adversarial training mechanism when training our supervised models to enable strong classification performance in adversarial environments.
arXiv Detail & Related papers (2021-05-14T10:05:10Z) - Resolving Class Imbalance in Object Detection with Weighted Cross
Entropy Losses [0.0]
Object detection is an important task in computer vision that supports many real-world applications such as autonomous driving, surveillance, and robotics.
Detector performance remains limited on specialized datasets with uneven object class distributions.
We propose to address this problem by applying several weighted variants of the Cross Entropy loss.
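The exact weighting schemes explored in that paper are not listed here; as one example of a weighted Cross Entropy variant, the sketch below uses the "effective number of samples" weighting of Cui et al. (2019), with all names and values purely illustrative:

```python
# Illustrative weighted Cross Entropy variant using "effective number of
# samples" class weights (Cui et al., 2019); not necessarily the paper's scheme.
import torch

def effective_number_weights(class_counts, beta=0.999):
    eff_num = 1.0 - torch.pow(torch.tensor(beta), class_counts.float())
    weights = (1.0 - beta) / eff_num                       # rarer classes weigh more
    return weights * (len(class_counts) / weights.sum())   # normalize to sum to C

class_counts = torch.tensor([9000, 1200, 150, 30])
loss_fn = torch.nn.CrossEntropyLoss(weight=effective_number_weights(class_counts))
loss = loss_fn(torch.randn(16, 4), torch.randint(0, 4, (16,)))
```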
arXiv Detail & Related papers (2020-06-02T06:36:12Z) - Identifying and Compensating for Feature Deviation in Imbalanced Deep
Learning [59.65752299209042]
We investigate learning a ConvNet under such a scenario.
We found that a ConvNet significantly over-fits the minor classes.
We propose to incorporate class-dependent temperatures (CDT) when training the ConvNet.
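As a hedged sketch of the class-dependent temperature idea, assuming the commonly cited form a_c = (N_max / N_c)^gamma applied to the logits during training (names and values are illustrative, not the authors' code):

```python
# Sketch of class-dependent temperature (CDT) scaling during training;
# assumes temperatures a_c = (N_max / N_c) ** gamma, an illustrative choice.
import torch
import torch.nn.functional as F

def cdt_cross_entropy(logits, targets, class_counts, gamma=0.2):
    temps = (class_counts.max() / class_counts.float()) ** gamma  # >= 1 for rare classes
    return F.cross_entropy(logits / temps, targets)

loss = cdt_cross_entropy(torch.randn(16, 4),
                         torch.randint(0, 4, (16,)),
                         torch.tensor([9000, 1200, 150, 30]))
```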
arXiv Detail & Related papers (2020-01-06T03:52:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.