Model Optimization in Imbalanced Regression
- URL: http://arxiv.org/abs/2206.09991v1
- Date: Mon, 20 Jun 2022 20:23:56 GMT
- Title: Model Optimization in Imbalanced Regression
- Authors: Aníbal Silva, Rita P. Ribeiro, and Nuno Moniz
- Abstract summary: Imbalanced domain learning aims to produce accurate models in predicting instances that, though underrepresented, are of utmost importance for the domain.
One of the main reasons for this is the lack of loss functions capable of focusing on minimizing the errors of extreme (rare) values.
Recently, an evaluation metric was introduced: the Squared Error Relevance Area (SERA).
This metric places greater emphasis on the errors committed at extreme values while also accounting for performance over the overall target variable domain.
- Score: 2.580765958706854
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Imbalanced domain learning aims to produce accurate models in predicting
instances that, though underrepresented, are of utmost importance for the
domain. Research in this field has been mainly focused on classification tasks.
Comparatively, the number of studies carried out in the context of regression
tasks is negligible. One of the main reasons for this is the lack of loss
functions capable of focusing on minimizing the errors of extreme (rare)
values. Recently, an evaluation metric was introduced: Squared Error Relevance
Area (SERA). This metric places greater emphasis on the errors committed at
extreme values while also accounting for the performance in the overall target
variable domain, thus preventing severe bias. However, its effectiveness as an
optimization metric is unknown. In this paper, our goal is to study the impacts
of using SERA as an optimization criterion in imbalanced regression tasks.
Using gradient boosting algorithms as proof of concept, we perform an
experimental study with 36 data sets of different domains and sizes. Results
show that models optimized with SERA as the objective function outperform the
models produced by their respective standard boosting algorithms at
the prediction of extreme values. This confirms that SERA can be embedded as a
loss function into optimization-based learning algorithms for imbalanced
regression scenarios.
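As a rough illustration of how SERA can be computed, the sketch below approximates the metric numerically. The relevance function is user-supplied (mapping target values to [0, 1]); the function and parameter names here are illustrative, not the authors' reference implementation.

```python
import numpy as np

def sera(y_true, y_pred, relevance, thresholds=None):
    """Approximate the Squared Error Relevance Area (SERA).

    For each relevance cutoff t, SER_t is the sum of squared errors over
    the instances whose relevance phi(y) >= t; SERA integrates SER_t for
    t in [0, 1], here via the trapezoidal rule on a grid of cutoffs.
    """
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 101)
    phi = relevance(y_true)                 # relevance of each true target
    sq_err = (y_true - y_pred) ** 2         # per-instance squared error
    ser = np.array([sq_err[phi >= t].sum() for t in thresholds])
    return np.trapz(ser, thresholds)
```

Because the errors of high-relevance (extreme) instances are counted at every cutoff while low-relevance errors only enter near t = 0, minimizing this quantity naturally prioritizes the rare, extreme values.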
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that existing machine unlearning techniques do not hold up in challenging evaluation settings.
arXiv Detail & Related papers (2024-10-30T17:20:10Z)
- Automatic debiasing of neural networks via moment-constrained learning [0.0]
Naively learning the regression function and taking a sample mean of the target functional results in biased estimators.
We propose moment-constrained learning as a new Riesz representer (RR) learning approach that addresses some shortcomings of automatic debiasing.
arXiv Detail & Related papers (2024-09-29T20:56:54Z)
- Evaluating Mathematical Reasoning Beyond Accuracy [50.09931172314218]
We introduce ReasonEval, a new methodology for evaluating the quality of reasoning steps.
We show that ReasonEval achieves state-of-the-art performance on human-labeled datasets.
We observe that ReasonEval can play a significant role in data selection.
arXiv Detail & Related papers (2024-04-08T17:18:04Z)
- Target Variable Engineering [0.0]
We compare the predictive performance of regression models trained to predict numeric targets vs. classifiers trained to predict their binarized counterparts.
We find that regression requires significantly more computational effort to converge upon the optimal performance.
arXiv Detail & Related papers (2023-10-13T23:12:21Z)
- On the Efficacy of Generalization Error Prediction Scoring Functions [33.24980750651318]
Generalization error predictors (GEPs) aim to predict model performance on unseen distributions by deriving dataset-level error estimates from sample-level scores.
We rigorously study the effectiveness of popular scoring functions (confidence, local manifold smoothness, model agreement) independent of mechanism choice.
arXiv Detail & Related papers (2023-03-23T18:08:44Z)
- A Computational Exploration of Emerging Methods of Variable Importance Estimation [0.0]
Estimating the importance of variables is an essential task in modern machine learning.
We propose a computational and theoretical exploration of the emerging methods of variable importance estimation.
The implementation has shown that PERF has the best performance in the case of highly correlated data.
arXiv Detail & Related papers (2022-08-05T20:00:56Z)
- Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
arXiv Detail & Related papers (2022-02-14T16:42:16Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled target examples whose confidence exceeds that threshold.
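The ATC idea above can be sketched in a few lines. This is an illustrative reconstruction assuming max-softmax confidence scores; the function and variable names are assumptions, not the authors' code.

```python
import numpy as np

def atc_predict_accuracy(source_conf, source_correct, target_conf):
    """Average Thresholded Confidence (ATC), sketched.

    Learn a confidence threshold t on labeled source data so that the
    fraction of source examples with confidence >= t matches the source
    accuracy, then predict target accuracy as the fraction of unlabeled
    target examples whose confidence exceeds t.
    """
    src_acc = source_correct.mean()                    # observed source accuracy
    t = np.quantile(source_conf, 1.0 - src_acc)        # calibrate threshold on source
    return (target_conf >= t).mean()                   # predicted target accuracy
```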
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim at improving data efficiency for both classification and regression setups in deep learning.
To take the power of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv Detail & Related papers (2021-10-09T13:56:48Z)
- Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates [68.09049111171862]
This work focuses on quantifying, reducing and analyzing regression errors in the NLP model updates.
We formulate the regression-free model updates into a constrained optimization problem.
We empirically analyze how model ensemble reduces regression.
arXiv Detail & Related papers (2021-05-07T03:33:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.