Rank-N-Contrast: Learning Continuous Representations for Regression
- URL: http://arxiv.org/abs/2210.01189v2
- Date: Tue, 10 Oct 2023 02:03:13 GMT
- Title: Rank-N-Contrast: Learning Continuous Representations for Regression
- Authors: Kaiwen Zha, Peng Cao, Jeany Son, Yuzhe Yang, Dina Katabi
- Abstract summary: Rank-N-Contrast (RNC) is a framework that learns continuous representations for regression by contrasting samples against each other based on their rankings in the target space.
RNC guarantees the desired order of learned representations in accordance with the target orders.
RNC achieves state-of-the-art performance, highlighting its intriguing properties including better data efficiency, robustness to spurious targets and data corruptions, and generalization to distribution shifts.
- Score: 28.926518084216607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep regression models typically learn in an end-to-end fashion without
explicitly emphasizing a regression-aware representation. Consequently, the
learned representations exhibit fragmentation and fail to capture the
continuous nature of sample orders, inducing suboptimal results across a wide
range of regression tasks. To fill the gap, we propose Rank-N-Contrast (RNC), a
framework that learns continuous representations for regression by contrasting
samples against each other based on their rankings in the target space. We
demonstrate, theoretically and empirically, that RNC guarantees the desired
order of learned representations in accordance with the target orders, enjoying
not only better performance but also significantly improved robustness,
efficiency, and generalization. Extensive experiments using five real-world
regression datasets that span computer vision, human-computer interaction, and
healthcare verify that RNC achieves state-of-the-art performance, highlighting
its intriguing properties including better data efficiency, robustness to
spurious targets and data corruptions, and generalization to distribution
shifts. Code is available at: https://github.com/kaiwenzha/Rank-N-Contrast.
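To make the objective concrete, here is a minimal PyTorch sketch of a ranking-based contrastive loss of the kind the abstract describes: for each anchor, every candidate sample is contrasted against all samples whose labels lie at least as far from the anchor's label, so the learned features inherit the label ordering. The similarity measure (negative L2 distance), the default temperature, and the function name are illustrative assumptions; the authors' reference implementation in the linked repository is authoritative.

```python
import torch

def rank_n_contrast_loss(features: torch.Tensor,
                         targets: torch.Tensor,
                         temperature: float = 2.0) -> torch.Tensor:
    """Illustrative sketch of a ranking-based contrastive loss.

    features: (N, D) embeddings from the encoder.
    targets:  (N,) continuous regression labels.
    For anchor i and candidate j, the softmax denominator runs over every
    sample k with |y_i - y_k| >= |y_i - y_j|, pulling samples that rank
    closer to the anchor in label space closer together in feature space.
    """
    n = features.size(0)
    targets = targets.view(n, 1).float()

    # Similarity: negative L2 distance, scaled by the temperature.
    sim = -torch.cdist(features, features, p=2) / temperature   # (N, N)

    # Pairwise label distances d[i, j] = |y_i - y_j|.
    label_dist = torch.cdist(targets, targets, p=1)             # (N, N)

    eye = torch.eye(n, dtype=torch.bool, device=features.device)

    # mask[i, j, k] is True iff k is in the denominator set of the
    # (anchor i, candidate j) pair: |y_i - y_k| >= |y_i - y_j|, k != i.
    mask = label_dist.unsqueeze(1) >= label_dist.unsqueeze(2)   # (N, N, N)
    mask = mask & ~eye.unsqueeze(1)

    # Masked log-sum-exp over k gives each pair's log-denominator.
    sim_k = sim.unsqueeze(1).masked_fill(~mask, float('-inf'))  # (N, N, N)
    log_denom = torch.logsumexp(sim_k, dim=2)                   # (N, N)

    # -log softmax per (i, j) pair, averaged over all pairs with i != j.
    return (log_denom - sim)[~eye].mean()
```

Note that this sketch materializes an N x N x N mask for readability, which is only practical for modest batch sizes; a production implementation would iterate over anchors instead. In a two-stage setup, one would first train the encoder with this loss and then fit a regressor on the frozen features.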
Related papers
- Contrastive Learning for Regression on Hyperspectral Data [4.931067393619175]
We propose a contrastive learning framework for the regression tasks for hyperspectral data.
Experiments on synthetic and real hyperspectral datasets show that the proposed framework and transformations significantly improve the performance of regression models.
arXiv Detail & Related papers (2024-02-12T21:33:46Z)
- NeRCC: Nested-Regression Coded Computing for Resilient Distributed Prediction Serving Systems [18.85527080950587]
NeRCC is a general straggler-resistant framework for approximate coded computing.
NeRCC accurately approximates the original predictions in a wide range of stragglers, outperforming the state-of-the-art by up to 23%.
arXiv Detail & Related papers (2024-02-06T20:31:15Z)
- Implicit Counterfactual Data Augmentation for Robust Learning [24.795542869249154]
This study proposes an Implicit Counterfactual Data Augmentation method to remove spurious correlations and make stable predictions.
Experiments have been conducted across various biased learning scenarios covering both image and text datasets.
arXiv Detail & Related papers (2023-04-26T10:36:40Z)
- GUESR: A Global Unsupervised Data-Enhancement with Bucket-Cluster Sampling for Sequential Recommendation [58.6450834556133]
We propose graph contrastive learning to enhance item representations with complex associations from the global view.
We extend the CapsNet module with a target-attention mechanism to derive users' dynamic preferences.
Our proposed GUESR not only achieves significant improvements but can also serve as a general enhancement strategy.
arXiv Detail & Related papers (2023-03-01T05:46:36Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Enhancing Sequential Recommendation with Graph Contrastive Learning [64.05023449355036]
This paper proposes a novel sequential recommendation framework, namely Graph Contrastive Learning for Sequential Recommendation (GCL4SR).
GCL4SR employs a Weighted Item Transition Graph (WITG), built based on interaction sequences of all users, to provide global context information for each interaction and weaken the noise information in the sequence data.
Experiments on real-world datasets demonstrate that GCL4SR consistently outperforms state-of-the-art sequential recommendation methods.
arXiv Detail & Related papers (2022-05-30T03:53:31Z)
- Augmentation-induced Consistency Regularization for Classification [25.388324221293203]
We propose a consistency regularization framework based on data augmentation, called CR-Aug.
CR-Aug forces the output distributions of different sub models generated by data augmentation to be consistent with each other.
We apply CR-Aug to image and audio classification tasks and conduct extensive experiments to verify its effectiveness; a generic sketch of the consistency idea appears after this list.
arXiv Detail & Related papers (2022-05-25T03:15:36Z)
- Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential Recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data-sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for Sequential Recommendation (CoSeRec).
arXiv Detail & Related papers (2021-08-14T07:15:25Z)
- Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
- Efficient Facial Feature Learning with Wide Ensemble-based Convolutional Neural Networks [20.09586211332088]
We present experiments on Ensembles with Shared Representations based on convolutional networks.
We show that redundancy and computational load can be dramatically reduced by varying the branching level of the ESR.
Experiments on large-scale datasets suggest that ESRs reduce the remaining residual generalization error.
arXiv Detail & Related papers (2020-01-17T14:32:27Z)
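As a companion to the Augmentation-induced Consistency Regularization entry above, the sketch below shows the generic pattern that its summary describes: two independently augmented views of each input pass through the model, and a consistency term pushes their output distributions together. The symmetric KL divergence, the single-model two-view setup, and the weight alpha are assumptions made for illustration, not necessarily CR-Aug's exact design.

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence between two predicted class distributions."""
    log_p = F.log_softmax(logits_a, dim=-1)
    log_q = F.log_softmax(logits_b, dim=-1)
    kl_pq = F.kl_div(log_q, log_p.exp(), reduction='batchmean')  # KL(p || q)
    kl_qp = F.kl_div(log_p, log_q.exp(), reduction='batchmean')  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)

def training_step(model, x, y, augment, alpha: float = 1.0):
    """Cross-entropy on one view plus consistency between two views.

    `model` and `augment` (a stochastic augmentation callable) are
    placeholders supplied by the caller.
    """
    logits_a = model(augment(x))  # first augmented view
    logits_b = model(augment(x))  # second, independently augmented view
    task_loss = F.cross_entropy(logits_a, y)
    return task_loss + alpha * consistency_loss(logits_a, logits_b)
```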
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.