Mixup Your Own Pairs
- URL: http://arxiv.org/abs/2309.16633v2
- Date: Fri, 29 Sep 2023 04:22:54 GMT
- Title: Mixup Your Own Pairs
- Authors: Yilei Wu, Zijian Dong, Chongyao Chen, Wangchunshu Zhou, Juan Helen Zhou
- Abstract summary: We argue that the potential of contrastive learning for regression has been overshadowed due to the neglect of two crucial aspects: ordinality-awareness and hardness.
Specifically, we propose Supervised Contrastive Learning for Regression with Mixup (SupReMix)
It takes anchor-inclusive mixtures (mixup of the anchor and a distinct negative sample) as hard negative pairs and anchor-exclusive mixtures (mixup of two distinct negative samples) as hard positive pairs at the embedding level.
- Score: 22.882694278940598
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In representation learning, regression has traditionally received less
attention than classification. Directly applying representation learning
techniques designed for classification to regression often results in
fragmented representations in the latent space, yielding sub-optimal
performance. In this paper, we argue that the potential of contrastive learning
for regression has been overshadowed due to the neglect of two crucial aspects:
ordinality-awareness and hardness. To address these challenges, we advocate
"mixup your own contrastive pairs for supervised contrastive regression",
instead of relying solely on real/augmented samples. Specifically, we propose
Supervised Contrastive Learning for Regression with Mixup (SupReMix). It takes
anchor-inclusive mixtures (mixup of the anchor and a distinct negative sample)
as hard negative pairs and anchor-exclusive mixtures (mixup of two distinct
negative samples) as hard positive pairs at the embedding level. This strategy
formulates harder contrastive pairs by integrating richer ordinal information.
Through extensive experiments on six regression datasets including 2D images,
volumetric images, text, tabular data, and time-series signals, coupled with
theoretical analysis, we demonstrate that SupReMix pre-training fosters
continuous ordered representations of regression data, resulting in significant
improvement in regression performance. Furthermore, SupReMix is superior to
other approaches in a range of regression challenges including transfer
learning, imbalanced training data, and scenarios with fewer training samples.
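
A minimal NumPy sketch of this pairing scheme (variable names, the Beta-distributed coefficient, and the label-matched mixing rule below are illustrative assumptions, not the authors' exact recipe):

```python
import numpy as np

rng = np.random.default_rng(0)

def mix(z1, z2, lam):
    """Convex combination of two embeddings (mixup at the embedding level)."""
    return lam * z1 + (1.0 - lam) * z2

# Toy anchor and two distinct negatives (samples with different labels).
z_anchor, z_neg1, z_neg2 = rng.normal(size=(3, 128))
y_anchor, y_neg1, y_neg2 = 50.0, 40.0, 60.0  # regression labels

# Anchor-inclusive mixture: anchor mixed with a negative -> hard negative pair.
lam = rng.beta(2.0, 2.0)  # Beta(2, 2) is an assumption, not the paper's choice
hard_negative = mix(z_anchor, z_neg1, lam)

# Anchor-exclusive mixture: two negatives whose labels bracket the anchor's
# label, mixed so the interpolated label equals y_anchor -> hard positive pair.
lam_pos = (y_anchor - y_neg2) / (y_neg1 - y_neg2)  # lam*y1 + (1-lam)*y2 = y_a
hard_positive = mix(z_neg1, z_neg2, lam_pos)

# (z_anchor, hard_positive) and (z_anchor, hard_negative) would then enter a
# supervised contrastive loss over the batch.
```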
Related papers
- Efficient Medical Image Restoration via Reliability Guided Learning in Frequency Domain [29.81704480466466]
Medical image restoration aims to recover high-quality images from degraded observations, a need that arises in many clinical scenarios.
Existing deep learning-based restoration methods struggle to deliver reconstructions that are both accurate and computationally efficient.
We present LRformer, a Lightweight Transformer-based method via Reliability-guided learning in the frequency domain.
arXiv Detail & Related papers (2025-04-15T15:26:28Z)
- Q-PART: Quasi-Periodic Adaptive Regression with Test-time Training for Pediatric Left Ventricular Ejection Fraction Regression [45.69922532213079]
We address the challenge of adaptive pediatric Left Ventricular Ejection Fraction (LVEF) assessment.
We propose a novel Quasi-Periodic Adaptive Regression with Test-time Training (Q-PART) framework.
arXiv Detail & Related papers (2025-03-06T06:24:51Z)
- Benchmarking Robustness of Contrastive Learning Models for Medical Image-Report Retrieval [2.9801426627439453]
This study benchmarks the robustness of four state-of-the-art contrastive learning models: CLIP, CXR-RePaiR, MedCLIP, and CXR-CLIP.
Our findings reveal that all evaluated models are highly sensitive to out-of-distribution data.
By addressing these limitations, we can develop more reliable cross-domain retrieval models for medical applications.
arXiv Detail & Related papers (2025-01-15T20:37:04Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple Logits Retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Over-training with Mixup May Hurt Generalization [32.64382185990981]
We report a previously unobserved phenomenon in Mixup training.
On a number of standard datasets, the performance of Mixup-trained models starts to decay after training for a large number of epochs.
We show theoretically that Mixup training may introduce undesired data-dependent label noises to the synthesized data.
arXiv Detail & Related papers (2023-03-02T18:37:34Z)
- Hybrid Contrastive Constraints for Multi-Scenario Ad Ranking [38.666592866591344]
Multi-scenario ad ranking aims at leveraging the data from multiple domains or channels for training a unified ranking model.
We propose a Hybrid Contrastive Constrained approach (HC2) for multi-scenario ad ranking.
arXiv Detail & Related papers (2023-02-06T09:15:39Z)
- Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective [51.70661197256033]
We propose ARCO, a semi-supervised contrastive learning framework with stratified group theory for medical image segmentation.
We first propose building ARCO through the concept of variance-reduced estimation and show that certain variance-reduction techniques are particularly beneficial in pixel/voxel-level segmentation tasks.
We experimentally validate our approaches on eight benchmarks, i.e., five 2D/3D medical and three semantic segmentation datasets, with different label settings.
arXiv Detail & Related papers (2023-02-03T13:50:25Z)
- GraVIS: Grouping Augmented Views from Independent Sources for Dermatology Analysis [52.04899592688968]
We propose GraVIS, which is specifically optimized for learning self-supervised features from dermatology images.
GraVIS significantly outperforms its transfer learning and self-supervised learning counterparts in both lesion segmentation and disease classification tasks.
arXiv Detail & Related papers (2023-01-11T11:38:37Z)
- C-Mixup: Improving Generalization in Regression [71.10418219781575]
The mixup algorithm improves generalization by linearly interpolating pairs of examples and their corresponding labels.
We propose C-Mixup, which adjusts the sampling probability based on the similarity of the labels.
C-Mixup achieves 6.56%, 4.76%, 5.82% improvements in in-distribution generalization, task generalization, and out-of-distribution robustness, respectively.
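
As a rough sketch, the label-similarity-weighted sampling might look like the following; the Gaussian kernel and its bandwidth are assumptions, since the summary only states that sampling depends on label similarity:

```python
import numpy as np

rng = np.random.default_rng(0)

def c_mixup_pair(X, y, i, sigma=1.0, alpha=2.0):
    """Sample a mixing partner for example i with probability that decays
    with label distance, then apply standard mixup."""
    w = np.exp(-((y - y[i]) ** 2) / (2 * sigma**2))  # label-similarity kernel
    w[i] = 0.0                                       # never pair i with itself
    p = w / w.sum()
    j = rng.choice(len(y), p=p)
    lam = rng.beta(alpha, alpha)
    return lam * X[i] + (1 - lam) * X[j], lam * y[i] + (1 - lam) * y[j]

X, y = rng.normal(size=(100, 16)), rng.normal(size=100)
x_mix, y_mix = c_mixup_pair(X, y, i=0)
```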
arXiv Detail & Related papers (2022-10-11T20:39:38Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold an iterative MGDUN algorithm into a model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- DoubleMix: Simple Interpolation-Based Data Augmentation for Text Classification [56.817386699291305]
This paper proposes a simple yet effective data augmentation approach termed DoubleMix.
DoubleMix first generates several perturbed samples for each training example.
It then uses the perturbed and original data to carry out a two-step interpolation in the hidden space of neural models.
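
A hedged sketch of such a two-step interpolation; the perturbation method, mixing weights, and the constraint keeping the result near the original are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden state of one original example and of 3 perturbed copies of it
# (the perturbations themselves, e.g. token dropout, happen upstream).
h_orig = rng.normal(size=64)
h_pert = rng.normal(size=(3, 64))

# Step 1: mix the perturbed hidden states with random convex weights.
w = rng.dirichlet(np.ones(3))
h_synth = (w[:, None] * h_pert).sum(axis=0)

# Step 2: interpolate the synthetic state back toward the original; keeping
# lam > 0.5 keeps the mixture closer to the original (an assumption).
lam = 0.5 + 0.5 * rng.beta(2.0, 2.0)
h_mixed = lam * h_orig + (1.0 - lam) * h_synth
```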
arXiv Detail & Related papers (2022-09-12T15:01:04Z)
- Hard Negative Sampling Strategies for Contrastive Representation Learning [4.1531215150301035]
UnReMix is a hard negative sampling strategy that takes into account anchor similarity, model uncertainty and representativeness.
Experimental results on several benchmarks show that UnReMix improves negative sample selection, and subsequently downstream performance when compared to state-of-the-art contrastive learning methods.
arXiv Detail & Related papers (2022-06-02T17:55:15Z)
- Maximum Entropy on Erroneous Predictions (MEEP): Improving model calibration for medical image segmentation [10.159176702917788]
We introduce MEEP, a training strategy for segmentation networks which selectively penalizes overconfident predictions, focusing only on misclassified pixels.
We benchmark the proposed strategy in two challenging segmentation tasks: white matter hyperintensity lesions in magnetic resonance images (MRI) of the brain, and atrial segmentation in cardiac MRI.
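
A minimal sketch of an entropy term restricted to misclassified pixels; the summary does not give MEEP's exact loss, so the form and weighting below are assumptions:

```python
import numpy as np

def meep_penalty(probs, labels):
    """Entropy-maximization term over misclassified pixels only (sketch).
    probs: (n_pixels, n_classes) softmax outputs; labels: (n_pixels,) ints."""
    wrong = probs.argmax(axis=1) != labels        # erroneous predictions only
    if not wrong.any():
        return 0.0
    p = probs[wrong]
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return -float(entropy.mean())                 # minimizing this maximizes H

# The training loss would then be something like cross-entropy plus a
# weighted meep_penalty term (the weighting is an assumption).
```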
arXiv Detail & Related papers (2021-12-22T20:34:20Z)
- Adaptive Contrast for Image Regression in Computer-Aided Disease Assessment [22.717658723840255]
We propose the first contrastive learning framework for deep image regression, namely AdaCon.
AdaCon consists of a feature learning branch via a novel adaptive-margin contrastive loss and a regression prediction branch.
We demonstrate the effectiveness of AdaCon on two medical image regression tasks.
arXiv Detail & Related papers (2021-12-22T07:13:02Z)
- Cross-Site Severity Assessment of COVID-19 from CT Images via Domain Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) from computed tomography (CT) images greatly aids the estimation of intensive care unit (ICU) events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z)
- MixRL: Data Mixing Augmentation for Regression using Reinforcement Learning [2.1345682889327837]
Existing techniques for data augmentation largely focus on classification tasks and do not readily apply to regression tasks.
We show that mixing examples with a large data or label distance may have an increasingly negative effect on model performance.
We propose MixRL, a data augmentation meta learning framework for regression that learns for each example how many nearest neighbors it should be mixed with for the best model performance.
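
The reinforcement-learning component that picks k per example is beyond a short sketch; assuming k is already chosen, the neighbor-constrained mixing step might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_with_k_neighbors(X, y, i, k, alpha=2.0):
    """Mix example i with one of its k nearest neighbors in label space.
    (MixRL learns k per example via reinforcement learning; k is given here.)"""
    nearest = np.argsort(np.abs(y - y[i]))[1 : k + 1]  # exclude i itself
    j = rng.choice(nearest)
    lam = rng.beta(alpha, alpha)
    return lam * X[i] + (1 - lam) * X[j], lam * y[i] + (1 - lam) * y[j]

X, y = rng.normal(size=(100, 16)), rng.normal(size=100)
x_mix, y_mix = mix_with_k_neighbors(X, y, i=0, k=5)
```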
arXiv Detail & Related papers (2021-06-07T07:01:39Z)
- Bootstrapping Your Own Positive Sample: Contrastive Learning With Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
arXiv Detail & Related papers (2021-04-07T06:02:04Z)
- ReMix: Towards Image-to-Image Translation with Limited Data [154.71724970593036]
We propose a data augmentation method (ReMix) to tackle the limited-data issue in image-to-image translation.
We interpolate training samples at the feature level and propose a novel content loss based on the perceptual relations among samples.
The proposed approach effectively reduces the ambiguity of generation and renders content-preserving results.
arXiv Detail & Related papers (2021-03-31T06:24:10Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, the two contrastive losses jointly constrain the clustering results of mini-batch samples at both the sample and class levels.
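
A simplified sketch of the two views, using a generic InfoNCE over rows (sample view) and columns (class view) of the batch's class-probability matrix; the cosine similarity and temperature are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(A, B, tau=0.5):
    """InfoNCE between matched rows of A and B under cosine similarity."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    logits = A @ B.T / tau
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # matched rows are positives

# Class-probability matrices for a batch and its augmented views.
P = rng.dirichlet(np.ones(10), size=32)      # (batch, classes)
P_aug = rng.dirichlet(np.ones(10), size=32)

loss = info_nce(P, P_aug) + info_nce(P.T, P_aug.T)   # sample view + class view
```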
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- Bayesian Uncertainty Estimation of Learned Variational MRI Reconstruction [63.202627467245584]
We introduce a Bayesian variational framework to quantify the model-immanent (epistemic) uncertainty.
We demonstrate that our approach yields competitive results for undersampled MRI reconstruction.
arXiv Detail & Related papers (2021-02-12T18:08:14Z)
- On Mixup Regularization [16.748910388577308]
Mixup is a data augmentation technique that creates new examples as convex combinations of training points and labels.
We show that, under a new interpretation of Mixup as training on randomly perturbed data, this random perturbation induces multiple known regularization schemes.
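Concretely, given examples (x_i, y_i) and (x_j, y_j), Mixup trains on x̃ = λ·x_i + (1−λ)·x_j with label ỹ = λ·y_i + (1−λ)·y_j, where λ is drawn from a Beta(α, α) distribution.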
arXiv Detail & Related papers (2020-06-10T20:11:46Z)