Towards Balanced Learning for Instance Recognition
- URL: http://arxiv.org/abs/2108.10175v1
- Date: Mon, 23 Aug 2021 13:40:45 GMT
- Title: Towards Balanced Learning for Instance Recognition
- Authors: Jiangmiao Pang, Kai Chen, Qi Li, Zhihai Xu, Huajun Feng, Jianping Shi,
Wanli Ouyang, Dahua Lin
- Abstract summary: We propose Libra R-CNN, a framework towards balanced learning for instance recognition.
It integrates IoU-balanced sampling, balanced feature pyramid, and objective re-weighting, which reduce imbalance at the sample, feature, and objective levels, respectively.
- Score: 149.76724446376977
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Instance recognition has advanced rapidly with the development of
various deep convolutional neural networks. Compared to network architectures,
the training process, which is also crucial to the success of detectors, has
received relatively less attention. In this work, we carefully revisit the
standard training practice of detectors and find that detection performance is
often limited by imbalance during the training process, which generally occurs
at three levels: sample level, feature level, and objective level. To mitigate
the adverse effects this causes, we propose Libra R-CNN, a simple yet effective
framework towards balanced learning for instance recognition. It integrates
IoU-balanced sampling, balanced feature pyramid, and objective re-weighting,
which reduce imbalance at the sample, feature, and objective levels,
respectively. Extensive experiments conducted on the MS COCO, LVIS, and Pascal
VOC datasets prove the effectiveness of the overall balanced design.
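The first of the three components, IoU-balanced sampling, replaces uniform random sampling of negatives with sampling spread evenly across IoU bins, so hard negatives (higher IoU with ground truth) are not drowned out by the many easy ones. A minimal sketch is below; the bin count, IoU cutoff, and the top-up fallback are illustrative assumptions, not the paper's exact configuration:

```python
import random
from collections import defaultdict

def iou_balanced_sample(negatives, num_samples, num_bins=3, max_iou=0.5):
    """Sample negative anchors evenly across IoU bins in [0, max_iou).

    `negatives` is a list of (anchor_id, iou) pairs. Bin boundaries and
    the random top-up fallback are illustrative choices.
    """
    bins = defaultdict(list)
    for anchor_id, iou in negatives:
        if iou < max_iou:
            k = min(int(iou / max_iou * num_bins), num_bins - 1)
            bins[k].append(anchor_id)
    per_bin = num_samples // num_bins
    sampled = []
    for k in range(num_bins):
        pool = bins[k]
        sampled.extend(random.sample(pool, min(per_bin, len(pool))))
    # If some bins were short, top up with plain random negatives.
    if len(sampled) < num_samples:
        chosen = set(sampled)
        remaining = [a for a, _ in negatives if a not in chosen]
        sampled.extend(random.sample(
            remaining, min(num_samples - len(sampled), len(remaining))))
    return sampled
```

Compared with uniform sampling, this guarantees each IoU range contributes roughly equally to the minibatch.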
Related papers
- Energy Score-based Pseudo-Label Filtering and Adaptive Loss for Imbalanced Semi-supervised SAR target recognition [1.2035771704626825]
Existing semi-supervised SAR ATR algorithms show low recognition accuracy under class imbalance.
This work offers a semi-supervised SAR target recognition approach for class-imbalanced data, using dynamic energy scores and an adaptive loss.
arXiv Detail & Related papers (2024-11-06T14:45:16Z)
- Simplifying Neural Network Training Under Class Imbalance [77.39968702907817]
Real-world datasets are often highly class-imbalanced, which can adversely impact the performance of deep learning models.
The majority of research on training neural networks under class imbalance has focused on specialized loss functions, sampling techniques, or two-stage training procedures.
We demonstrate that simply tuning existing components of standard deep learning pipelines, such as the batch size, data augmentation, and label smoothing, can achieve state-of-the-art performance without any such specialized class imbalance methods.
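One of the standard components mentioned above, label smoothing, takes only a few lines; the sketch below uses a hypothetical smoothing factor, since the paper's point is that tuning such ordinary knobs can rival specialized imbalance methods:

```python
def smooth_labels(one_hot, epsilon=0.1):
    """Blend a one-hot target toward the uniform distribution.

    `epsilon` is an illustrative tuning value, not a recommendation
    from the paper.
    """
    num_classes = len(one_hot)
    return [(1 - epsilon) * y + epsilon / num_classes for y in one_hot]
```

The smoothed target still sums to 1, so it drops into any cross-entropy loss unchanged.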
arXiv Detail & Related papers (2023-12-05T05:52:44Z)
- Overcoming Recency Bias of Normalization Statistics in Continual Learning: Balance and Adaptation [67.77048565738728]
Continual learning involves learning a sequence of tasks and balancing their knowledge appropriately.
We propose Adaptive Balance of BN (AdaB$^2$N), which incorporates a Bayesian-based strategy to appropriately adapt task-wise contributions.
Our approach achieves significant performance gains across a wide range of benchmarks.
arXiv Detail & Related papers (2023-10-13T04:50:40Z)
- Assessor-Guided Learning for Continual Environments [17.181933166255448]
This paper proposes an assessor-guided learning strategy for continual learning.
An assessor guides the learning process of a base learner by controlling the direction and pace of the learning process.
The assessor is trained in a meta-learning manner with a meta-objective to boost the learning process of the base learner.
arXiv Detail & Related papers (2023-03-21T06:45:14Z)
- Unbiased and Efficient Self-Supervised Incremental Contrastive Learning [31.763904668737304]
We propose a self-supervised Incremental Contrastive Learning (ICL) framework consisting of a novel Incremental InfoNCE (NCE-II) loss function.
ICL achieves up to 16.7x training speedup and 16.8x faster convergence with competitive results.
arXiv Detail & Related papers (2023-01-28T06:11:31Z)
- Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z)
- Dynamic Multi-Scale Loss Optimization for Object Detection [14.256807110937622]
We study the objective imbalance of multi-scale detector training.
We propose an Adaptive Variance Weighting (AVW) to balance multi-scale loss according to the statistical variance.
We develop a novel Reinforcement Learning Optimization (RLO) to decide the weighting scheme probabilistically during training.
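Variance-based loss weighting can be sketched in a few lines. The inverse-variance rule and the normalization below are illustrative choices in the spirit of AVW; the paper's exact statistic and update rule may differ:

```python
import statistics

def variance_weights(loss_history, eps=1e-8):
    """Weight each scale's loss by the inverse of its recent variance,
    so noisier (higher-variance) objectives contribute less.

    `loss_history` is a list of recent-loss lists, one per scale.
    The inverse-variance rule is an assumed, illustrative choice.
    """
    inv_vars = [1.0 / (statistics.pvariance(h) + eps) for h in loss_history]
    total = sum(inv_vars)
    return [w / total for w in inv_vars]
```

The weights sum to one, so the overall loss magnitude stays comparable across training steps.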
arXiv Detail & Related papers (2021-08-09T13:12:41Z)
- Improving Music Performance Assessment with Contrastive Learning [78.8942067357231]
This study investigates contrastive learning as a potential method to improve existing MPA systems.
We introduce a weighted contrastive loss suitable for regression tasks applied to a convolutional neural network.
Our results show that contrastive-based methods are able to match and exceed SoTA performance for MPA regression tasks.
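A contrastive loss adapted for regression can be sketched as below, where each pair is weighted by how close its labels are. The Gaussian label-distance weighting, margin, and function names are assumptions rather than the paper's exact formulation:

```python
import math

def weighted_contrastive_loss(embeddings, labels, sigma=1.0, margin=1.0):
    """Pairwise contrastive loss with continuous 'similarity':
    pairs with close labels are pulled together, distant ones pushed
    apart, weighted by a Gaussian of the label gap (an assumed choice)."""
    n, loss, pairs = len(embeddings), 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(embeddings[i], embeddings[j])
            w = math.exp(-((labels[i] - labels[j]) ** 2) / (2 * sigma ** 2))
            # Weighted pull for similar labels, push for dissimilar ones.
            loss += w * d ** 2 + (1 - w) * max(0.0, margin - d) ** 2
            pairs += 1
    return loss / pairs
```

Unlike the binary same/different split of classification-style contrastive losses, the weight varies smoothly with the label distance, which is what makes it suitable for regression targets.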
arXiv Detail & Related papers (2021-08-03T19:24:25Z)
- Counterfactual Representation Learning with Balancing Weights [74.67296491574318]
Key to causal inference with observational data is achieving balance in predictive features associated with each treatment type.
Recent literature has explored representation learning to achieve this goal.
We develop an algorithm for flexible, scalable and accurate estimation of causal effects.
arXiv Detail & Related papers (2020-10-23T19:06:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.