Representation Learning by Ranking across multiple tasks
- URL: http://arxiv.org/abs/2103.15093v2
- Date: Sat, 19 Apr 2025 12:15:13 GMT
- Title: Representation Learning by Ranking across multiple tasks
- Authors: Lifeng Gu
- Abstract summary: We convert the representation learning problem under different tasks into a ranking problem. By adopting the ranking problem as a unified perspective, representation learning tasks can be solved in a unified manner. Experiments under various learning tasks, such as classification, retrieval, multi-label learning, and regression, prove the superiority of the representation learning by ranking framework.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, representation learning has become the research focus of the machine learning community. Large-scale neural networks are a crucial step toward achieving general intelligence, with their success largely attributed to their ability to learn abstract representations of data. Several learning fields are actively discussing how to learn representations, yet there is a lack of a unified perspective. We convert the representation learning problem under different tasks into a ranking problem. By adopting the ranking problem as a unified perspective, representation learning tasks can be solved in a unified manner by optimizing the ranking loss. Experiments under various learning tasks, such as classification, retrieval, multi-label learning, and regression, prove the superiority of the representation learning by ranking framework. Furthermore, experiments under self-supervised learning tasks demonstrate the significant advantage of the ranking framework in processing unsupervised training data, with data augmentation techniques further enhancing its performance.
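Since the abstract names a ranking objective without giving its form, here is a minimal PyTorch sketch of one common instantiation: a pairwise margin ranking loss over embeddings. The cosine-similarity scoring, margin value, and function name are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of a pairwise ranking (margin) loss over learned embeddings.
# The scoring function and margin below are assumptions for illustration.
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(anchor: torch.Tensor,
                          positive: torch.Tensor,
                          negative: torch.Tensor,
                          margin: float = 0.2) -> torch.Tensor:
    """Encourage each anchor to rank its positive above its negative.

    All inputs are (batch, dim) embedding tensors produced by any encoder.
    """
    sim_pos = F.cosine_similarity(anchor, positive, dim=-1)
    sim_neg = F.cosine_similarity(anchor, negative, dim=-1)
    # Hinge on the ranking violation: the positive should score at least
    # `margin` higher than the negative.
    return F.relu(margin - sim_pos + sim_neg).mean()

# Usage: for classification the positive shares the anchor's label, for
# retrieval it is a relevant item, and for self-supervised training it can
# be an augmented view of the same input.
anchor = torch.randn(32, 128, requires_grad=True)
positive = torch.randn(32, 128)
negative = torch.randn(32, 128)
loss = pairwise_ranking_loss(anchor, positive, negative)
loss.backward()
```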
Related papers
- Heterogeneous Graph Neural Networks with Loss-decrease-aware Curriculum Learning [1.2224845909459847]
Heterogeneous graph neural networks (HGNNs) have achieved excellent performance in handling heterogeneous information networks (HINs).
Previous methods have started to explore the use of curriculum learning strategy to train HGNNs.
We propose a novel loss-decrease-aware training schedule (LDTS)
arXiv Detail & Related papers (2024-05-10T15:06:53Z) - Negotiated Representations to Prevent Forgetting in Machine Learning Applications [0.0]
Catastrophic forgetting is a significant challenge in the field of machine learning.
We propose a novel method for preventing catastrophic forgetting in machine learning applications.
arXiv Detail & Related papers (2023-11-30T22:43:50Z) - Look-Ahead Selective Plasticity for Continual Learning of Visual Tasks [9.82510084910641]
We propose a new mechanism that takes place during task boundaries, i.e., when one task finishes and another starts.
We evaluate the proposed methods on benchmark computer vision datasets including CIFAR10 and TinyImagenet.
arXiv Detail & Related papers (2023-11-02T22:00:23Z) - A Study of Forward-Forward Algorithm for Self-Supervised Learning [65.268245109828]
We study the performance of forward-forward vs. backpropagation for self-supervised representation learning.
Our main finding is that while the forward-forward algorithm performs comparably to backpropagation during (self-supervised) training, the transfer performance is significantly lagging behind in all the studied settings.
arXiv Detail & Related papers (2023-09-21T10:14:53Z) - Speech representation learning: Learning bidirectional encoders with single-view, multi-view, and multi-task methods [7.1345443932276424]
This thesis focuses on representation learning for sequence data over time or space.
It aims to improve downstream sequence prediction tasks by using the learned representations.
arXiv Detail & Related papers (2023-07-25T20:38:55Z) - Accelerating exploration and representation learning with offline pre-training [52.6912479800592]
We show that exploration and representation learning can be improved by separately learning two different models from a single offline dataset.
We show that learning a state representation using noise-contrastive estimation and a model of auxiliary reward can significantly improve the sample efficiency on the challenging NetHack benchmark.
arXiv Detail & Related papers (2023-03-31T18:03:30Z) - Functional Knowledge Transfer with Self-supervised Representation Learning [11.566644244783305]
This work investigates the unexplored usability of self-supervised representation learning in the direction of functional knowledge transfer.
In this work, functional knowledge transfer is achieved by joint optimization of self-supervised learning pseudo task and supervised learning task.
arXiv Detail & Related papers (2023-03-12T21:14:59Z) - EfficientTrain: Exploring Generalized Curriculum Learning for Training Visual Backbones [80.662250618795]
This paper presents a new curriculum learning approach for the efficient training of visual backbones (e.g., vision Transformers).
As an off-the-shelf method, it reduces the wall-time training cost of a wide variety of popular models by >1.5x on ImageNet-1K/22K without sacrificing accuracy.
arXiv Detail & Related papers (2022-11-17T17:38:55Z) - Feature Forgetting in Continual Representation Learning [48.89340526235304]
It has been suggested that representations do not suffer from "catastrophic forgetting" even in plain continual learning, but little more is known about their characteristics.
We devise a protocol for evaluating representation in continual learning, and then use it to present an overview of the basic trends of continual representation learning.
To study the feature forgetting problem, we create a synthetic dataset to identify and visualize the prevalence of feature forgetting in neural networks.
arXiv Detail & Related papers (2022-05-26T13:38:56Z) - Task-Induced Representation Learning [14.095897879222672]
We evaluate the effectiveness of representation learning approaches for decision making in visually complex environments.
We find that representation learning generally improves sample efficiency on unseen tasks even in visually complex scenes.
arXiv Detail & Related papers (2022-04-25T17:57:10Z) - X-Learner: Learning Cross Sources and Tasks for Universal Visual Representation [71.51719469058666]
We propose a representation learning framework called X-Learner.
X-Learner learns the universal feature of multiple vision tasks supervised by various sources.
X-Learner achieves strong performance on different tasks without extra annotations, modalities and computational costs.
arXiv Detail & Related papers (2022-03-16T17:23:26Z) - Incremental Class Learning using Variational Autoencoders with Similarity Learning [0.0]
Catastrophic forgetting in neural networks during incremental learning remains a challenging problem.
Our research investigates catastrophic forgetting for four well-known metric-based loss functions during incremental class learning.
The angular loss was least affected, followed by contrastive, triplet loss, and centre loss with good mining techniques.
arXiv Detail & Related papers (2021-10-04T10:19:53Z) - Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment for learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z) - Co$^2$L: Contrastive Continual Learning [69.46643497220586]
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks.
We propose a rehearsal-based continual learning algorithm that focuses on continually learning and maintaining transferable representations.
arXiv Detail & Related papers (2021-06-28T06:14:38Z) - Sense and Learn: Self-Supervision for Omnipresent Sensors [9.442811508809994]
We present a framework named Sense and Learn for representation or feature learning from raw sensory data.
It consists of several auxiliary tasks that can learn high-level and broadly useful features entirely from unannotated data without any human involvement in the tedious labeling process.
Our methodology achieves results that are competitive with supervised approaches and, in most cases, closes the gap by fine-tuning the network while learning the downstream tasks.
arXiv Detail & Related papers (2020-09-28T11:57:43Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z) - Auto-Rectify Network for Unsupervised Indoor Depth Estimation [119.82412041164372]
We establish that the complex ego-motions exhibited in handheld settings are a critical obstacle for learning depth.
We propose a data pre-processing method that rectifies training images by removing their relative rotations for effective learning.
Our results outperform the previous unsupervised SOTA method by a large margin on the challenging NYUv2 dataset.
arXiv Detail & Related papers (2020-06-04T08:59:17Z) - Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing or low-pass filters. As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data (see the sketch after this entry).
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
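The curriculum-by-smoothing idea above lends itself to a short sketch: blur the CNN's feature maps with a low-pass (Gaussian) filter early in training and anneal the blur away so the network sees progressively more high-frequency information. The kernel size, sigma schedule, and module names below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: a conv block whose output is low-pass filtered, with the
# blur strength (sigma) annealed toward zero over training.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(size: int = 5, sigma: float = 1.0) -> torch.Tensor:
    coords = torch.arange(size, dtype=torch.float32) - size // 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    g = g / g.sum()
    return torch.outer(g, g)  # separable 2D Gaussian

class SmoothedConvBlock(nn.Module):
    """Conv + ReLU whose feature maps are blurred by a depthwise Gaussian."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.sigma = 1.0  # decayed toward ~0 as training progresses

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.relu(self.conv(x))
        if self.sigma > 1e-3:
            k = gaussian_kernel(5, self.sigma).to(x.device, x.dtype)
            k = k.expand(x.shape[1], 1, 5, 5)  # one filter per channel
            x = F.conv2d(x, k, padding=2, groups=x.shape[1])
        return x

# Each epoch, decay sigma (e.g. block.sigma *= 0.9) so the feature maps carry
# progressively more high-frequency detail.
block = SmoothedConvBlock(3, 16)
out = block(torch.randn(2, 3, 32, 32))
```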