Nested Collaborative Learning for Long-Tailed Visual Recognition
- URL: http://arxiv.org/abs/2203.15359v1
- Date: Tue, 29 Mar 2022 08:55:39 GMT
- Title: Nested Collaborative Learning for Long-Tailed Visual Recognition
- Authors: Jun Li, Zichang Tan, Jun Wan, Zhen Lei and Guodong Guo
- Abstract summary: NCL consists of two core components, namely Nested Individual Learning (NIL) and Nested Balanced Online Distillation (NBOD).
To learn representations more thoroughly, both NIL and NBOD are formulated in a nested way, in which learning is conducted not only on all categories from a full perspective but also on some hard categories from a partial perspective.
In NCL, learning from the two perspectives is nested, highly related, and complementary, helping the network capture not only global, robust features but also a meticulous distinguishing ability.
- Score: 71.6074806468641
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Networks trained on long-tailed datasets vary remarkably despite
identical training settings, which reveals the great uncertainty in long-tailed
learning. To alleviate this uncertainty, we propose Nested Collaborative
Learning (NCL), which tackles the problem by training multiple experts
collaboratively. NCL consists of two core components, namely Nested Individual
Learning (NIL) and Nested Balanced Online Distillation (NBOD), which focus on
individual supervised learning for each expert and on knowledge transfer among
the experts, respectively. To learn representations more thoroughly, both NIL
and NBOD are formulated in a nested way, in which learning is conducted not
only on all categories from a full perspective but also on a set of hard
categories from a partial perspective. For the partial perspective, the
proposed Hard Category Mining (HCM) selects the negative categories with the
highest predicted scores as the hard categories. In NCL, learning from the two
perspectives is nested, highly related, and complementary, helping the network
capture not only global, robust features but also a meticulous ability to
distinguish confusing categories. Moreover, self-supervision is utilized for
feature enhancement. Extensive experiments demonstrate the superiority of our
method, which outperforms the state of the art whether a single model or an
ensemble is used.
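To make HCM and the nested two-perspective learning concrete, below is a minimal PyTorch-style sketch. It is an illustration under stated assumptions, not the paper's implementation: the function names, the number of hard categories `k`, the temperature `T`, and the uniform averaging of the other experts are all hypothetical, and the class-frequency re-balancing implied by "Balanced" in NBOD is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def hard_category_mining(logits, targets, k):
    """HCM sketch: for each sample, keep the ground-truth category plus the
    k negative categories with the highest predicted scores (hypothetical k)."""
    masked = logits.scatter(1, targets.unsqueeze(1), float('-inf'))  # hide the positive class
    hard_negatives = masked.topk(k, dim=1).indices                   # top-k hard negatives
    return torch.cat([targets.unsqueeze(1), hard_negatives], dim=1)  # (B, k + 1) category ids

def nil_loss(logits, targets, k=30):
    """NIL sketch: supervised loss from the full perspective (all categories)
    plus the same loss from the partial perspective (hard categories only)."""
    full = F.cross_entropy(logits, targets)
    idx = hard_category_mining(logits, targets, k)
    partial_logits = logits.gather(1, idx)  # positive class sits in column 0
    partial_targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return full + F.cross_entropy(partial_logits, partial_targets)

def nbod_loss(expert_logits, targets, k=30, T=2.0):
    """NBOD sketch: each expert distills from the averaged predictions of the
    others, again from both the full and the partial (HCM) perspective."""
    loss = 0.0
    for i, logits in enumerate(expert_logits):
        teacher = torch.stack(
            [l for j, l in enumerate(expert_logits) if j != i]).mean(0).detach()
        # full perspective: KL between softened distributions over all categories
        loss = loss + F.kl_div(F.log_softmax(logits / T, dim=1),
                               F.softmax(teacher / T, dim=1), reduction='batchmean')
        # partial perspective: the same KL restricted to the hard categories
        idx = hard_category_mining(teacher, targets, k)
        loss = loss + F.kl_div(F.log_softmax(logits.gather(1, idx) / T, dim=1),
                               F.softmax(teacher.gather(1, idx) / T, dim=1),
                               reduction='batchmean')
    return loss / len(expert_logits)
```

The design point mirrored here is that both the supervised term and the distillation term are computed twice, once over all categories and once over the HCM-selected hard categories, which is what makes the learning "nested". For, say, three experts, each expert i would be trained with `nil_loss(logits_i, targets) + nbod_loss([logits_1, logits_2, logits_3], targets)`.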
Related papers
- Combining Supervised Learning and Reinforcement Learning for Multi-Label Classification Tasks with Partial Labels [27.53399899573121]
We propose an RL-based framework combining the exploration ability of reinforcement learning and the exploitation ability of supervised learning.
Experimental results across various tasks, including document-level relation extraction, demonstrate the generalization and effectiveness of our framework.
arXiv Detail & Related papers (2024-06-24T03:36:19Z)
- NCL++: Nested Collaborative Learning for Long-Tailed Visual Recognition [63.90327120065928]
We propose Nested Collaborative Learning (NCL++), which tackles the long-tailed learning problem through collaborative learning.
To achieve collaborative learning in the long-tailed setting, balanced online distillation is proposed.
To improve the meticulous distinguishing ability on confusing categories, we further propose Hard Category Mining.
arXiv Detail & Related papers (2023-06-29T06:10:40Z)
- Hierarchical Contrastive Learning Enhanced Heterogeneous Graph Neural Network [59.860534520941485]
Heterogeneous graph neural networks (HGNNs), as an emerging technique, have shown a superior capacity for dealing with heterogeneous information networks (HINs).
Recently, contrastive learning, a self-supervised method, has become one of the most exciting learning paradigms and shows great potential when no labels are available.
In this paper, we study the problem of self-supervised HGNNs and propose a novel co-contrastive learning mechanism for HGNNs, named HeCo.
arXiv Detail & Related papers (2023-04-24T16:17:21Z)
- Deep Negative Correlation Classification [82.45045814842595]
Existing deep ensemble methods naively train many different models and then aggregate their predictions.
We propose deep negative correlation classification (DNCC).
DNCC yields a deep classification ensemble where each individual estimator is both accurate and negatively correlated with the others (see the sketch after this list).
arXiv Detail & Related papers (2022-12-14T07:35:20Z)
- Collaboration of Pre-trained Models Makes Better Few-shot Learner [49.89134194181042]
Few-shot classification requires deep neural networks to learn generalized representations from only limited training images.
Recently, CLIP-based methods have shown promising few-shot performance, benefiting from contrastive language-image pre-training.
We propose CoMo, a Collaboration of pre-trained Models, which incorporates diverse prior knowledge from various pre-training paradigms for better few-shot learning.
arXiv Detail & Related papers (2022-09-25T16:23:12Z)
- Semi-Supervised Learning with Multi-Head Co-Training [11.219776340005296]
Co-training, an extension of self-training, is one of the frameworks for semi-supervised learning.
We present a simple and efficient algorithm, Multi-Head Co-Training.
arXiv Detail & Related papers (2021-07-10T08:53:14Z)
- Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.
We conduct extensive experiments and demonstrate that our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-06T12:57:36Z)
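As a companion to the Deep Negative Correlation Classification entry above, here is a minimal sketch of a negatively correlated ensemble loss. It follows the classic Liu-and-Yao-style negative correlation penalty rather than DNCC's exact formulation, and the trade-off weight `lam` and all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def negative_correlation_loss(member_logits, targets, lam=0.5):
    """Ensemble sketch: each member is trained to be accurate, while a penalty
    pushes its prediction away from the ensemble mean so that the members'
    errors decorrelate (classic negative correlation learning, not DNCC itself)."""
    probs = torch.stack([F.softmax(l, dim=1) for l in member_logits])  # (M, B, C)
    mean = probs.mean(dim=0)                                           # ensemble prediction
    accuracy = sum(F.cross_entropy(l, targets) for l in member_logits) / len(member_logits)
    # minimizing the negated squared deviation rewards spread around the mean
    diversity = -((probs - mean) ** 2).sum(dim=-1).mean()
    return accuracy + lam * diversity
```

The weight `lam` controls the balance between individual accuracy and ensemble diversity; at `lam = 0` the members are trained independently.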