Learning Label Encodings for Deep Regression
- URL: http://arxiv.org/abs/2303.02273v1
- Date: Sat, 4 Mar 2023 00:11:34 GMT
- Title: Learning Label Encodings for Deep Regression
- Authors: Deval Shah and Tor M. Aamodt
- Abstract summary: Deep regression networks are widely used to tackle the problem of predicting a continuous value for a given input.
The space of label encodings for regression is large.
This paper introduces Regularized Label Encoding Learning (RLEL) for end-to-end training of an entire network and its label encoding.
- Score: 10.02230163797581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep regression networks are widely used to tackle the problem of predicting
a continuous value for a given input. Task-specialized approaches for training
regression networks have shown significant improvement over generic approaches,
such as direct regression. More recently, a generic approach based on
regression by binary classification using binary-encoded labels has shown
significant improvement over direct regression. The space of label encodings
for regression is large, but automated approaches to find a good label encoding for a given application have heretofore been lacking. This paper introduces
Regularized Label Encoding Learning (RLEL) for end-to-end training of an entire
network and its label encoding. RLEL provides a generic approach for tackling
regression. Underlying RLEL is our observation that the search space of label
encodings can be constrained and efficiently explored by using a continuous
search space of real-valued label encodings combined with a regularization
function designed to encourage encodings with certain properties. These
properties balance the probability of classification error in individual bits
against error correction capability. Label encodings found by RLEL result in
lower or comparable errors to manually designed label encodings. Applying RLEL
results in 10.9% and 12.4% improvement in Mean Absolute Error (MAE) over direct
regression and multiclass classification, respectively. Our evaluation
demonstrates that RLEL can be combined with off-the-shelf feature extractors
and is suitable across different architectures, datasets, and tasks. Code is
available at https://github.com/ubc-aamodt-group/RLEL_regression.
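To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of regression through a learnable real-valued label encoding: targets are quantized into bins, each bin owns a trainable code vector, the network is trained to reproduce the code of the target bin, and a regularizer nudges the code entries toward properties that let each bit act like a reliable binary classifier. The module name, hyperparameters, and the specific regularizer are illustrative assumptions, not the authors' implementation; RLEL's actual regularization function balances per-bit classification error against error-correction capability, and the official code is at the repository linked above.

```python
# Illustrative sketch (not the authors' implementation) of regression via a
# learnable real-valued label encoding, loosely following the RLEL idea.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LabelEncodingRegressor(nn.Module):
    def __init__(self, feature_extractor, feat_dim, num_bins=256,
                 code_bits=32, y_min=0.0, y_max=1.0):
        super().__init__()
        self.feature_extractor = feature_extractor       # off-the-shelf backbone
        self.head = nn.Linear(feat_dim, code_bits)       # predicts a code vector
        # One trainable real-valued code per quantized target bin.
        self.codes = nn.Parameter(torch.randn(num_bins, code_bits) * 0.1)
        # Bin centers map a decoded bin index back to a continuous value.
        self.register_buffer("centers", torch.linspace(y_min, y_max, num_bins))

    def forward(self, x):
        return self.head(self.feature_extractor(x))      # (B, code_bits)

    def loss(self, pred_code, y, reg_weight=1e-2):
        # Score the predicted code against every bin's code by dot product,
        # then train with a classification loss over the target bin.
        logits = pred_code @ self.codes.t()               # (B, num_bins)
        bins = torch.bucketize(y, self.centers).clamp(max=self.codes.shape[0] - 1)
        cls_loss = F.cross_entropy(logits, bins)
        # Illustrative regularizer (a stand-in for RLEL's): push code entries
        # toward +/-1 so each dimension behaves like a binary classifier bit.
        reg = ((self.codes.abs() - 1.0) ** 2).mean()
        return cls_loss + reg_weight * reg

    @torch.no_grad()
    def predict(self, x):
        logits = self.forward(x) @ self.codes.t()
        return self.centers[logits.argmax(dim=-1)]        # decoded value
```

Decoding here simply picks the bin whose code correlates best with the network output, which reduces regression to classification over quantized targets; the learned encoding then determines how errors in individual bits translate into errors in the recovered continuous value.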
Related papers
- LFFR: Logistic Function For (single-output) Regression [0.0]
We implement privacy-preserving regression training using data encrypted under a fully homomorphic encryption scheme.
We develop a novel and efficient algorithm called LFFR for homomorphic regression using the logistic function.
arXiv Detail & Related papers (2024-07-13T17:33:49Z) - Robust Capped lp-Norm Support Vector Ordinal Regression [85.84718111830752]
Ordinal regression is a specialized supervised problem where the labels show an inherent order.
Support Vector Ordinal Regression, as an outstanding ordinal regression model, is widely used in many ordinal regression tasks.
We introduce a new model, Capped $\ell_p$-Norm Support Vector Ordinal Regression (CSVOR), that is robust to outliers.
arXiv Detail & Related papers (2024-04-25T13:56:05Z) - Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z) - GEC-DePenD: Non-Autoregressive Grammatical Error Correction with
Decoupled Permutation and Decoding [52.14832976759585]
Grammatical error correction (GEC) is an important NLP task that is usually solved with autoregressive sequence-to-sequence models.
We propose a novel non-autoregressive approach to GEC that decouples the architecture into a permutation network and a decoder network.
We show that the resulting network improves over previously known non-autoregressive methods for GEC.
arXiv Detail & Related papers (2023-11-14T14:24:36Z) - Regularized Linear Regression for Binary Classification [20.710343135282116]
Regularized linear regression is a promising approach for binary classification problems in which the training set has noisy labels.
We show that for large enough regularization strength, the optimal weights concentrate around two values of opposite sign.
We observe that in many cases the corresponding "compression" of each weight to a single bit leads to very little loss in performance.
arXiv Detail & Related papers (2023-11-03T23:18:21Z) - Generating Unbiased Pseudo-labels via a Theoretically Guaranteed
Chebyshev Constraint to Unify Semi-supervised Classification and Regression [57.17120203327993]
The threshold-to-pseudo-label process (T2L) in classification uses confidence to determine the quality of a label.
In nature, regression also requires unbiased methods to generate high-quality labels.
We propose a theoretically guaranteed constraint for generating unbiased labels based on Chebyshev's inequality.
arXiv Detail & Related papers (2023-11-03T08:39:35Z) - Deep Imbalanced Regression via Hierarchical Classification Adjustment [50.19438850112964]
Regression tasks in computer vision are often formulated into classification by quantizing the target space into classes.
The majority of training samples lie in a head range of target values, while a minority of samples span a usually larger tail range.
We propose to construct hierarchical classifiers for solving imbalanced regression tasks.
Our novel hierarchical classification adjustment (HCA) for imbalanced regression shows superior results on three diverse tasks.
arXiv Detail & Related papers (2023-10-26T04:54:39Z) - Label Encoding for Regression Networks [9.386028796990399]
We introduce binary-encoded labels (BEL), which generalizes the application of binary classification to regression.
BEL achieves state-of-the-art accuracies for several regression benchmarks.
arXiv Detail & Related papers (2022-12-04T21:23:36Z) - Robust Neural Network Classification via Double Regularization [2.41710192205034]
We propose a novel double regularization of the neural network training loss that combines a penalty on the complexity of the classification model and an optimal reweighting of training observations.
We demonstrate DRFit, for neural net classification of (i) MNIST and (ii) CIFAR-10, in both cases with simulated mislabeling.
arXiv Detail & Related papers (2021-12-15T13:19:20Z) - Fast OSCAR and OWL Regression via Safe Screening Rules [97.28167655721766]
Ordered Weighted $L_1$ (OWL) regularized regression is a new regression analysis for high-dimensional sparse learning.
Proximal gradient methods are used as standard approaches to solve OWL regression.
We propose the first safe screening rule for OWL regression by exploring the order of the primal solution with the unknown order structure.
arXiv Detail & Related papers (2020-06-29T23:35:53Z)