Improve Representation for Imbalanced Regression through Geometric Constraints
- URL: http://arxiv.org/abs/2503.00876v1
- Date: Sun, 02 Mar 2025 12:31:34 GMT
- Title: Improve Representation for Imbalanced Regression through Geometric Constraints
- Authors: Zijian Dong, Yilei Wu, Chongyao Chen, Yingtian Zou, Yichi Zhang, Juan Helen Zhou
- Abstract summary: We focus on ensuring uniformity in the latent space for imbalanced regression through two key losses. The enveloping loss encourages the induced trace to uniformly occupy the surface of a hypersphere, while the homogeneity loss ensures smoothness. Our method integrates these geometric principles into the data representations via a Surrogate-driven Representation Learning framework.
- Score: 8.903197320328164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In representation learning, uniformity refers to the uniform feature distribution in the latent space (i.e., unit hypersphere). Previous work has shown that improving uniformity contributes to the learning of under-represented classes. However, most of the previous work focused on classification; the representation space of imbalanced regression remains unexplored. Classification-based methods are not suitable for regression tasks because they cluster features into distinct groups without considering the continuous and ordered nature essential for regression. From a geometric perspective, we focus on ensuring uniformity in the latent space for imbalanced regression through two key losses: enveloping and homogeneity. The enveloping loss encourages the induced trace to uniformly occupy the surface of a hypersphere, while the homogeneity loss ensures smoothness, with representations evenly spaced at consistent intervals. Our method integrates these geometric principles into the data representations via a Surrogate-driven Representation Learning (SRL) framework. Experiments with real-world regression and operator learning tasks highlight the importance of uniformity in imbalanced regression and validate the efficacy of our geometry-based loss functions.
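The abstract describes the two geometric losses only at a high level, so the exact formulations are not given here. The sketch below is one plausible reading, assuming L2-normalized features on the unit hypersphere and a batch sorted by its regression targets; the uniformity term reuses the Gaussian-potential surrogate of Wang & Isola (2020) as a stand-in for the paper's enveloping loss, and both function names are illustrative only, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def enveloping_loss(z: torch.Tensor) -> torch.Tensor:
    """Encourage normalized features to cover the unit hypersphere evenly.

    Stand-in uniformity term: log of the mean pairwise Gaussian potential
    (lower = more uniform coverage). The paper's actual enveloping loss may
    be defined differently.
    """
    z = F.normalize(z, dim=-1)                  # project onto the unit hypersphere
    sq_dists = torch.cdist(z, z, p=2).pow(2)    # pairwise squared distances
    n = z.size(0)
    off_diag = ~torch.eye(n, dtype=torch.bool, device=z.device)
    return torch.log(torch.exp(-2.0 * sq_dists[off_diag]).mean())


def homogeneity_loss(z_sorted: torch.Tensor) -> torch.Tensor:
    """Penalize uneven spacing along the label-ordered trace.

    Assumes `z_sorted` is ordered by regression target, so consecutive steps
    along the induced trace should have similar length; the variance of those
    step lengths is penalized.
    """
    z_sorted = F.normalize(z_sorted, dim=-1)
    steps = (z_sorted[1:] - z_sorted[:-1]).norm(dim=-1)  # consecutive gaps
    return steps.var()
```

In a training loop these terms would be added to the task loss, e.g. `loss = mse + lambda_env * enveloping_loss(z) + lambda_hom * homogeneity_loss(z_sorted)`, with the two weights treated as hyperparameters.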
Related papers
- Bridging Critical Gaps in Convergent Learning: How Representational Alignment Evolves Across Layers, Training, and Distribution Shifts [1.9458156037869137]
Much existing work relies on a limited set of metrics, overlooking transformation invariances required for proper alignment.
A second critical gap lies in understanding when alignment emerges during training.
Contrary to expectations that convergence builds gradually with task-specific learning, our findings reveal that nearly all convergence occurs within the first epoch.
These findings fill critical gaps in our understanding of representational convergence, with implications for neuroscience and AI.
arXiv Detail & Related papers (2025-02-26T00:04:24Z) - ACCon: Angle-Compensated Contrastive Regularizer for Deep Regression [28.491074229136014]
In deep regression, capturing the relationship among continuous labels in feature space is a fundamental challenge that has attracted increasing interest. Existing approaches often rely on order-aware representation learning or distance-based weighting. We propose an angle-compensated contrastive regularizer for deep regression, which adjusts the cosine distance between anchor and negative samples.
arXiv Detail & Related papers (2025-01-13T03:55:59Z) - Deep Regression Representation Learning with Topology [57.203857643599875]
We study how the effectiveness of a regression representation is influenced by its topology.
We introduce PH-Reg, a regularizer that matches the intrinsic dimension and topology of the feature space with the target space.
Experiments on synthetic and real-world regression tasks demonstrate the benefits of PH-Reg.
arXiv Detail & Related papers (2024-04-22T06:28:41Z) - Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits approach (LORT) without the requirement of prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z) - Gradient Aligned Regression via Pairwise Losses [40.676226700035585]
Gradient Aligned Regression (GAR) is a competitive alternative method in label space.
Running time experiments demonstrate the superior efficiency of the proposed GAR over existing methods.
We limit our current scope as regression on the clean data setting without noises, outliers or distributional shifts.
arXiv Detail & Related papers (2024-02-08T23:43:53Z) - Deep Generative Symbolic Regression [83.04219479605801]
Symbolic regression aims to discover concise closed-form mathematical equations from data.
Existing methods, ranging from search to reinforcement learning, fail to scale with the number of input variables.
We propose an instantiation of our framework, Deep Generative Symbolic Regression.
arXiv Detail & Related papers (2023-12-30T17:05:31Z) - Learning Linear Causal Representations from Interventions under General Nonlinear Mixing [52.66151568785088]
We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z) - Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression [53.15502562048627]
Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z) - Uniform Consistency in Nonparametric Mixture Models [12.382836502781258]
We study uniform consistency in nonparametric mixture models and mixed regression models.
In the case of mixed regression, we prove $L^1$ convergence of the regression functions while allowing for the component regression functions to intersect arbitrarily often.
arXiv Detail & Related papers (2021-08-31T17:53:52Z) - On dissipative symplectic integration with applications to gradient-based optimization [77.34726150561087]
We propose a geometric framework in which discretizations can be realized systematically.
We show that a generalization of symplectic to nonconservative and in particular dissipative Hamiltonian systems is able to preserve rates of convergence up to a controlled error.
arXiv Detail & Related papers (2020-04-15T00:36:49Z)