Tree Boosting Methods for Balanced and Imbalanced Classification and their Robustness Over Time in Risk Assessment
- URL: http://arxiv.org/abs/2504.18133v1
- Date: Fri, 25 Apr 2025 07:35:38 GMT
- Title: Tree Boosting Methods for Balanced and Imbalanced Classification and their Robustness Over Time in Risk Assessment
- Authors: Gissel Velarde, Michael Weichert, Anuj Deshmunkh, Sanjay Deshmane, Anindya Sudhir, Khushboo Sharma, Vaibhav Joshi
- Abstract summary: Tree-based methods such as XGBoost stand out in several benchmarks due to detection performance and speed. The developed method increases its recognition performance as more training data is given, and remains significantly superior to the precision-recall baseline determined by the ratio of positives to positives plus negatives.
- Score: 0.10925516251778125
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most real-world classification problems involve imbalanced datasets, which pose a challenge for Artificial Intelligence (AI), i.e., machine learning algorithms, because the minority class, often the one of greatest interest, is difficult to detect. This paper empirically evaluates the performance of tree boosting methods across different dataset sizes and class distributions, from perfectly balanced to highly imbalanced. For tabular data, tree-based methods such as XGBoost stand out in several benchmarks due to their detection performance and speed; therefore, XGBoost and Imbalance-XGBoost are evaluated. After motivating the use of machine learning for risk assessment, the paper reviews evaluation metrics for detection systems, i.e., binary classifiers. It proposes a method for data preparation followed by tree boosting with hyper-parameter optimization. The method is evaluated on private datasets of 1 thousand (1K), 10K, and 100K samples, on distributions with 50, 45, 25, and 5 percent positive samples. As expected, the method's recognition performance increases as more training data is given, and the F1 score decreases as the data distribution becomes more imbalanced, yet it remains significantly superior to the precision-recall baseline determined by the ratio of positives to positives plus negatives. Sampling to balance the training set does not provide consistent improvement and can deteriorate detection. In contrast, classifier hyper-parameter optimization improves recognition, but should be applied carefully depending on data volume and distribution. Finally, the method is robust to data variation over time up to a point; retraining can be used once performance starts deteriorating.
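As a rough, hypothetical illustration of the setup described in the abstract (not the authors' exact pipeline, whose datasets are private), the sketch below trains XGBoost on a synthetic imbalanced binary task, runs a small hyper-parameter search, and compares the resulting F1 score against the precision baseline given by positives divided by positives plus negatives. The dataset, the parameter grid, and the use of scale_pos_weight are illustrative assumptions.

```python
# Hypothetical sketch of the evaluation setup; not the authors' exact
# pipeline (their datasets are private, so a synthetic set stands in).
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in: 10K samples with 5% positives, mirroring the most
# imbalanced distribution evaluated in the paper.
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Baseline precision: ratio of positives to positives plus negatives.
print(f"precision baseline: {y_tr.mean():.3f}")

# Small hyper-parameter search; this grid is an assumption, not the
# paper's. scale_pos_weight upweights the minority class during training.
grid = {
    "max_depth": [3, 6],
    "n_estimators": [100, 300],
    "scale_pos_weight": [1.0, (y_tr == 0).sum() / (y_tr == 1).sum()],
}
search = GridSearchCV(XGBClassifier(eval_metric="logloss"),
                      grid, scoring="f1", cv=3)
search.fit(X_tr, y_tr)
print("best params:", search.best_params_)
print("test F1:", f1_score(y_te, search.predict(X_te)))
```

Imbalance-XGBoost, which the paper also evaluates, would slot into the same search but exposes weighted and focal losses rather than scale_pos_weight.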
Related papers
- DRoP: Distributionally Robust Data Pruning [11.930434318557156]
We conduct the first systematic study of the impact of data pruning on the classification bias of trained models. We propose DRoP, a distributionally robust approach to pruning, and empirically demonstrate its performance on standard computer vision benchmarks.
arXiv Detail & Related papers (2024-04-08T14:55:35Z)
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Evaluating XGBoost for Balanced and Imbalanced Data: Application to Fraud Detection [0.0]
This paper evaluates XGBoost's performance given different dataset sizes and class distributions.
XGBoost has been selected for evaluation, as it stands out in several benchmarks due to its detection performance and speed.
arXiv Detail & Related papers (2023-03-27T13:59:22Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose ReScore, a model-agnostic framework that boosts causal discovery performance by dynamically learning adaptive weights for the reweighted score function.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Revisiting Long-tailed Image Classification: Survey and Benchmarks with New Evaluation Metrics [88.39382177059747]
A corpus of metrics is designed for measuring the accuracy, robustness, and bounds of algorithms for learning with long-tailed distributions.
Based on our benchmarks, we re-evaluate the performance of existing methods on CIFAR10 and CIFAR100 datasets.
arXiv Detail & Related papers (2023-02-03T02:40:54Z)
- Experimenting with an Evaluation Framework for Imbalanced Data Learning (EFIDL) [9.010643838773477]
Data imbalance is one of the crucial issues in big data analysis with few labels.
Many data-balancing methods have been introduced to improve the performance of machine learning algorithms.
We propose a new evaluation framework for imbalanced data learning methods.
arXiv Detail & Related papers (2023-01-26T01:16:02Z)
- A Case Study on the Classification of Lost Circulation Events During Drilling using Machine Learning Techniques on an Imbalanced Large Dataset [0.0]
We utilize a dataset of 65,000+ records with a class imbalance problem from Azadegan oilfield formations in Iran.
Eleven of the dataset's seventeen parameters are chosen for classifying five lost circulation events.
To generate classification models, we used six basic machine learning algorithms and four ensemble learning methods.
arXiv Detail & Related papers (2022-09-04T12:28:40Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
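For concreteness, here is a minimal sketch of the ATC idea as summarized above, under stated assumptions: confidences are max-softmax scores, and all variable names are mine, not the paper's. The threshold is fit on labeled source data so that the fraction of source examples above it matches source accuracy; target accuracy is then estimated as the fraction of unlabeled target examples above that threshold.

```python
# Hedged sketch of Average Thresholded Confidence (ATC); details may
# differ from the paper. Inputs are per-example model confidences.
import numpy as np

def atc_estimate(src_conf, src_correct, tgt_conf):
    """Estimate target accuracy from unlabeled target confidences."""
    # Choose threshold t on source data so that the fraction of source
    # points with confidence above t equals the source accuracy.
    src_acc = src_correct.mean()
    t = np.quantile(src_conf, 1.0 - src_acc)
    # Predicted target accuracy: fraction of target confidences above t.
    return (tgt_conf > t).mean()

# Toy usage with random numbers standing in for real model outputs.
rng = np.random.default_rng(0)
src_conf = rng.uniform(0.5, 1.0, size=5_000)
src_correct = rng.uniform(size=5_000) < src_conf  # confident -> correct
tgt_conf = rng.uniform(0.4, 1.0, size=5_000)
print("estimated target accuracy:", atc_estimate(src_conf, src_correct, tgt_conf))
```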
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain out-of-distribution (OOD) instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
arXiv Detail & Related papers (2020-02-11T21:08:06Z)