NCL++: Nested Collaborative Learning for Long-Tailed Visual Recognition
- URL: http://arxiv.org/abs/2306.16709v3
- Date: Thu, 18 Jul 2024 02:38:49 GMT
- Title: NCL++: Nested Collaborative Learning for Long-Tailed Visual Recognition
- Authors: Zichang Tan, Jun Li, Jinhao Du, Jun Wan, Zhen Lei, Guodong Guo
- Abstract summary: We propose Nested Collaborative Learning (NCL++), which tackles the long-tailed learning problem through collaborative learning.
To achieve collaborative learning in the long-tailed setting, a balanced online distillation is proposed.
To improve fine-grained discrimination among confusing categories, we further propose Hard Category Mining.
- Score: 63.90327120065928
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Long-tailed visual recognition has received increasing attention in recent years. Due to the extremely imbalanced data distribution in long-tailed learning, the learning process exhibits great uncertainty. For example, the predictions of different experts on the same image vary remarkably despite identical training settings. To alleviate this uncertainty, we propose Nested Collaborative Learning (NCL++), which tackles the long-tailed learning problem through collaborative learning. Specifically, the collaborative learning has two components, namely inter-expert collaborative learning (InterCL) and intra-expert collaborative learning (IntraCL). InterCL trains multiple experts collaboratively and concurrently, aiming to transfer knowledge among different experts. IntraCL is similar to InterCL, but it conducts collaborative learning on multiple augmented copies of the same image within a single expert. To achieve collaborative learning in the long-tailed setting, a balanced online distillation is proposed to enforce consistent predictions among different experts and augmented copies, which reduces learning uncertainty. Moreover, to improve fine-grained discrimination among confusing categories, we further propose Hard Category Mining (HCM), which selects the negative categories with high predicted scores as the hard categories. The collaborative learning is then formulated in a nested way, in which learning is conducted not only on all categories from a full perspective but also on some hard categories from a partial perspective. Extensive experiments demonstrate the superiority of our method, which outperforms the state-of-the-art whether using a single model or an ensemble. The code will be publicly released.
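As an illustration of the two mechanisms described in the abstract, the sketch below shows, under stated assumptions, how hard categories might be mined and how a distillation-style consistency loss between two experts could be nested over a full view (all categories) and a partial view (hard categories only). This is a minimal PyTorch sketch, not the authors' released code; the names `hard_category_indices` and `collaborative_kl`, the temperature `T`, and `num_hard` are illustrative, and the class-prior re-balancing used by the paper's balanced online distillation is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def hard_category_indices(logits, targets, num_hard=30):
    """Hard Category Mining (sketch): keep the ground-truth class plus the
    highest-scoring negative classes for each sample."""
    masked = logits.clone()
    masked.scatter_(1, targets.unsqueeze(1), float("-inf"))  # exclude the true class
    hard = masked.topk(num_hard, dim=1).indices              # hardest negatives
    return torch.cat([targets.unsqueeze(1), hard], dim=1)    # (batch, num_hard + 1)

def collaborative_kl(student_logits, teacher_logits, T=2.0):
    """Soft-label KL between two experts' predictions; the teacher side is
    detached, so knowledge flows one way per call (swap roles for mutual learning)."""
    p_teacher = F.softmax(teacher_logits.detach() / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

# Nested consistency: a full view over all categories plus a partial view over
# the mined hard categories only.
logits_a = torch.randn(8, 100)                 # expert A, 8 samples, 100 classes
logits_b = torch.randn(8, 100)                 # expert B on the same batch
targets = torch.randint(0, 100, (8,))
idx = hard_category_indices(logits_a, targets, num_hard=10)
loss = collaborative_kl(logits_a, logits_b) \
     + collaborative_kl(logits_a.gather(1, idx), logits_b.gather(1, idx))
```

In the paper's intra-expert variant, the same kind of consistency term would be applied between augmented copies of an image passed through a single expert rather than between two experts.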
Related papers
- Combining Supervised Learning and Reinforcement Learning for Multi-Label Classification Tasks with Partial Labels [27.53399899573121]
We propose an RL-based framework combining the exploration ability of reinforcement learning and the exploitation ability of supervised learning.
Experimental results across various tasks, including document-level relation extraction, demonstrate the generalization and effectiveness of our framework.
arXiv Detail & Related papers (2024-06-24T03:36:19Z) - Learning More Generalized Experts by Merging Experts in Mixture-of-Experts [0.5221459608786241]
We show that incorporating a shared layer in a mixture-of-experts can lead to performance degradation.
We merge the two most frequently selected experts and update the least frequently selected expert using the combination of experts.
Our algorithm enhances transfer learning and mitigates catastrophic forgetting when applied to multi-domain task incremental learning.
arXiv Detail & Related papers (2024-05-19T11:55:48Z) - Quiz-based Knowledge Tracing [61.9152637457605]
Knowledge tracing aims to assess individuals' evolving knowledge states according to their learning interactions.
QKT achieves state-of-the-art performance compared to existing methods.
arXiv Detail & Related papers (2023-04-05T12:48:42Z) - When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more they benefit from learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients.
arXiv Detail & Related papers (2022-12-24T11:02:35Z) - Contrastive Learning with Boosted Memorization [36.957895270908324]
Self-supervised learning has achieved great success in representation learning for visual and textual data.
Recent attempts at self-supervised long-tailed learning rebalance from either the loss perspective or the model perspective.
We propose a novel Boosted Contrastive Learning (BCL) method to enhance long-tailed learning in the label-unaware context.
arXiv Detail & Related papers (2022-05-25T11:54:22Z) - Nested Collaborative Learning for Long-Tailed Visual Recognition [71.6074806468641]
NCL consists of two core components, namely Nested Individual Learning (NIL) and Nested Balanced Online Distillation (NBOD).
To learn representations more thoroughly, both NIL and NBOD are formulated in a nested way, in which learning is conducted not only on all categories from a full perspective but also on some hard categories from a partial perspective.
In NCL, the learning from the two perspectives is nested, highly related, and complementary, helping the network capture not only global and robust features but also fine-grained discriminative ability.
arXiv Detail & Related papers (2022-03-29T08:55:39Z) - Contrastive Learning with Adversarial Examples [79.39156814887133]
Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations.
This paper introduces a new family of adversarial examples for contrastive learning and uses these examples to define a new adversarial training algorithm for SSL, denoted as CLAE.
arXiv Detail & Related papers (2020-10-22T20:45:10Z) - Learning From Multiple Experts: Self-paced Knowledge Distillation for Long-tailed Classification [106.08067870620218]
We propose a self-paced knowledge distillation framework, termed Learning From Multiple Experts (LFME).
We refer to these models as 'Experts', and the proposed LFME framework aggregates the knowledge from multiple 'Experts' to learn a unified student model.
We conduct extensive experiments and demonstrate that our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-01-06T12:57:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.