Representation Learning by Ranking under multiple tasks
- URL: http://arxiv.org/abs/2103.15093v1
- Date: Sun, 28 Mar 2021 09:36:36 GMT
- Title: Representation Learning by Ranking under multiple tasks
- Authors: Lifeng Gu
- Abstract summary: The representation learning problem under multiple tasks is solved by optimizing an approximate NDCG loss.
Experiments on different learning tasks, including classification, retrieval, multi-label learning, regression, and self-supervised learning, demonstrate the superiority of the approximate NDCG loss.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, representation learning has become the research focus of the
machine learning community. Large-scale pre-trained neural networks have become the
first step toward realizing general intelligence. The key to the success of neural
networks lies in their ability to learn abstract representations of data. Several
learning fields are, in effect, studying how to learn representations, yet a unified
perspective is lacking. We convert the representation learning problem under multiple
tasks into a ranking problem and, taking ranking as the unified perspective, solve
representation learning under different tasks by optimizing an approximate NDCG loss.
Experiments on different learning tasks, including classification, retrieval,
multi-label learning, regression, and self-supervised learning, demonstrate the
superiority of the approximate NDCG loss. Further, in the self-supervised setting,
the training data are transformed by data augmentation to improve the performance of
the approximate NDCG loss, showing that the loss can make full use of the information
in unlabeled training data.
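The abstract does not reproduce the exact loss the authors optimize, but the general idea of a differentiable NDCG surrogate can be sketched as follows. This is a minimal illustration in the spirit of ApproxNDCG (soft ranks via pairwise sigmoids); the function name, temperature parameter, and binary relevance labels are assumptions for illustration, not the paper's implementation.

```python
import torch


def approx_ndcg_loss(scores, relevance, temperature=0.1, eps=1e-10):
    """Differentiable NDCG surrogate (hypothetical sketch, not the paper's code).

    scores:    (batch, n_items) predicted similarity scores of candidate items.
    relevance: (batch, n_items) graded relevance labels (e.g. 1 if an item
               shares the query's class, 0 otherwise).
    Returns 1 - approximate NDCG, so minimizing the loss improves the ranking.
    """
    # diff[b, i, j] = s_j - s_i; sigmoid(diff / T) ~ 1 when item j outranks item i.
    diff = scores.unsqueeze(1) - scores.unsqueeze(2)
    pairwise = torch.sigmoid(diff / temperature)
    # Zero the diagonal so an item does not count itself when computing its rank.
    pairwise = pairwise - torch.diag_embed(torch.diagonal(pairwise, dim1=1, dim2=2))
    approx_rank = 1.0 + pairwise.sum(dim=2)            # smooth rank of each item

    gains = torch.pow(2.0, relevance) - 1.0
    dcg = (gains / torch.log2(1.0 + approx_rank)).sum(dim=1)

    # Ideal DCG: gains sorted by true relevance (a constant w.r.t. the scores).
    ideal_gains, _ = torch.sort(gains, descending=True, dim=1)
    ideal_rank = torch.arange(1, scores.size(1) + 1,
                              device=scores.device, dtype=scores.dtype)
    idcg = (ideal_gains / torch.log2(1.0 + ideal_rank)).sum(dim=1)

    return (1.0 - dcg / (idcg + eps)).mean()
```

In a supervised setting the relevance labels could be 1 for items sharing the query's class and 0 otherwise; in the self-supervised setting mentioned in the abstract, augmented views of the same image could play the role of relevant items.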
Related papers
- Heterogeneous Graph Neural Networks with Loss-decrease-aware Curriculum Learning [1.2224845909459847]
Heterogeneous graph neural networks (HGNNs) have achieved excellent performance in handling heterogeneous information networks (HINs).
Previous methods have started to explore the use of curriculum learning strategies to train HGNNs.
We propose a novel loss-decrease-aware training schedule (LDTS).
arXiv Detail & Related papers (2024-05-10T15:06:53Z) - Negotiated Representations to Prevent Forgetting in Machine Learning Applications [0.0]
Catastrophic forgetting is a significant challenge in the field of machine learning.
We propose a novel method for preventing catastrophic forgetting in machine learning applications.
arXiv Detail & Related papers (2023-11-30T22:43:50Z) - Look-Ahead Selective Plasticity for Continual Learning of Visual Tasks [9.82510084910641]
We propose a new mechanism that takes place during task boundaries, i.e., when one task finishes and another starts.
We evaluate the proposed methods on benchmark computer vision datasets including CIFAR10 and TinyImagenet.
arXiv Detail & Related papers (2023-11-02T22:00:23Z) - A Study of Forward-Forward Algorithm for Self-Supervised Learning [65.268245109828]
We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, the transfer performance is significantly lagging behind in all the studied settings.
arXiv Detail & Related papers (2023-09-21T10:14:53Z) - EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers).
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z) - X-Learner: Learning Cross Sources and Tasks for Universal Visual Representation [71.51719469058666]
We propose a representation learning framework called X-Learner.
X-Learner learns the universal feature of multiple vision tasks supervised by various sources.
X-Learner achieves strong performance on different tasks without extra annotations, modalities, or computational cost.
arXiv Detail & Related papers (2022-03-16T17:23:26Z) - Incremental Class Learning using Variational Autoencoders with Similarity Learning [0.0]
Catastrophic forgetting in neural networks during incremental learning remains a challenging problem.
Our research investigates catastrophic forgetting for four well-known metric-based loss functions during incremental class learning.
The angular loss was least affected, followed by contrastive loss, triplet loss, and centre loss with good mining techniques.
arXiv Detail & Related papers (2021-10-04T10:19:53Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z) - Auto-Rectify Network for Unsupervised Indoor Depth Estimation [119.82412041164372]
We establish that the complex ego-motions exhibited in handheld settings are a critical obstacle for learning depth.
We propose a data pre-processing method that rectifies training images by removing their relative rotations for effective learning.
Our results outperform the previous unsupervised SOTA method by a large margin on the challenging NYUv2 dataset.
arXiv Detail & Related papers (2020-06-04T08:59:17Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing or low-pass filters (see the sketch after this list).
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
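As a rough illustration of the low-pass-filter curriculum described in the Curriculum By Smoothing entry above, the following sketch blurs CNN feature maps with a Gaussian kernel whose strength can be annealed toward zero during training. The function names, kernel size, and annealing policy are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def gaussian_kernel2d(sigma, kernel_size=5, device="cpu"):
    """Normalized 2D Gaussian kernel used as a low-pass filter (illustrative)."""
    coords = torch.arange(kernel_size, dtype=torch.float32, device=device)
    coords = coords - kernel_size // 2
    g = torch.exp(-(coords ** 2) / (2.0 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)                     # (kernel_size, kernel_size)


def smooth_features(x, sigma, kernel_size=5):
    """Low-pass filter feature maps x of shape (B, C, H, W), channel-wise.

    Early in training a large sigma suppresses high-frequency detail;
    annealing sigma toward zero gradually exposes the full-information maps.
    """
    if sigma <= 0:
        return x
    c = x.size(1)
    kernel = gaussian_kernel2d(sigma, kernel_size, device=x.device)
    kernel = kernel.view(1, 1, kernel_size, kernel_size).repeat(c, 1, 1, 1)
    return F.conv2d(x, kernel, padding=kernel_size // 2, groups=c)
```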
This list is automatically generated from the titles and abstracts of the papers on this site.