Decoupling Decision-Making in Fraud Prevention through Classifier
Calibration for Business Logic Action
- URL: http://arxiv.org/abs/2401.05240v2
- Date: Wed, 21 Feb 2024 20:56:39 GMT
- Authors: Emanuele Luzio and Moacir Antonelli Ponti and Christian Ramirez
Arevalo and Luis Argerich
- Abstract summary: We use calibration strategies to decouple machine learning (ML) classifiers from score-based actions within business logic frameworks.
Our findings highlight the trade-offs and performance implications of the approach.
In particular, the Isotonic and Beta calibration methods stand out in scenarios where there is a shift between training and testing data.
- Score: 1.8289218434318257
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine learning models typically focus on specific targets like creating
classifiers, often based on known population feature distributions in a
business context. However, models calculating individual features adapt over
time to improve precision, introducing the concept of decoupling: shifting from
point evaluation to data distribution. We use calibration strategies as
strategy for decoupling machine learning (ML) classifiers from score-based
actions within business logic frameworks. To evaluate these strategies, we
perform a comparative analysis using a real-world business scenario and
multiple ML models. Our findings highlight the trade-offs and performance
implications of the approach, offering valuable insights for practitioners
seeking to optimize their decoupling efforts. In particular, the Isotonic and
Beta calibration methods stand out in scenarios where there is a shift
between training and testing data.
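The abstract's core idea can be sketched in code: calibrate raw classifier scores into probabilities, so that a fixed business-logic threshold keeps its meaning even when the underlying model changes. The sketch below is illustrative only, on synthetic data (the paper's business dataset is not public): isotonic calibration uses scikit-learn directly, and Beta calibration is approximated by logistic regression on the features [ln(s), -ln(1-s)], following its standard parametrization; the canonical method additionally constrains the shape coefficients to be non-negative, which this sketch omits.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for an imbalanced fraud dataset.
X, y = make_classification(n_samples=4000, n_features=10,
                           weights=[0.9], random_state=0)
X_train, X_cal, y_train, y_cal = train_test_split(
    X, y, test_size=0.5, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
raw = clf.predict_proba(X_cal)[:, 1]  # raw, possibly miscalibrated scores

# Isotonic calibration: a monotone, non-parametric map from score to probability.
iso = IsotonicRegression(out_of_bounds="clip").fit(raw, y_cal)
p_iso = iso.predict(raw)

# Beta calibration, approximated as logistic regression on
# [ln(s), -ln(1 - s)]; C is large to make the fit nearly unregularized.
eps = 1e-12
s = np.clip(raw, eps, 1 - eps)
feats = np.column_stack([np.log(s), -np.log(1 - s)])
beta = LogisticRegression(C=1e6).fit(feats, y_cal)
p_beta = beta.predict_proba(feats)[:, 1]

# Decoupled decision: the business action reads calibrated probabilities,
# so this threshold (an illustrative policy value) survives model swaps.
THRESHOLD = 0.5
block = p_iso >= THRESHOLD
```

In this setup, retraining or replacing `clf` only requires refitting the calibration map on held-out data; the downstream action rule stays untouched, which is the decoupling the paper evaluates.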
Related papers
- Multiply Robust Estimation for Local Distribution Shifts with Multiple Domains [9.429772474335122]
We focus on scenarios where data distributions vary across multiple segments of the entire population.
We propose a two-stage multiply robust estimation method to improve model performance on each individual segment.
Our method is designed to be implemented with commonly used off-the-shelf machine learning models.
arXiv Detail & Related papers (2024-02-21T22:01:10Z) - Aggregation Weighting of Federated Learning via Generalization Bound
Estimation [65.8630966842025]
Federated Learning (FL) typically aggregates client model parameters using a weighting approach determined by sample proportions.
We replace the aforementioned weighting method with a new strategy that considers the generalization bounds of each local model.
arXiv Detail & Related papers (2023-11-10T08:50:28Z) - Automatic Generation of Attention Rules For Containment of Machine
Learning Model Errors [1.4987559345379062]
We present several algorithms ("strategies") for determining optimal rules to separate observations.
In particular, we prefer strategies that use feature-based slicing because they are human-interpretable, model-agnostic, and require minimal supplementary inputs or knowledge.
To evaluate strategies, we introduce metrics to measure various desired qualities, such as their performance, stability, and generalizability to unseen data.
arXiv Detail & Related papers (2023-05-14T10:15:35Z) - CLIPood: Generalizing CLIP to Out-of-Distributions [73.86353105017076]
Contrastive language-image pre-training (CLIP) models have shown impressive zero-shot ability, but the further adaptation of CLIP on downstream tasks undesirably degrades OOD performances.
We propose CLIPood, a fine-tuning method that can adapt CLIP models to OOD situations where both domain shifts and open classes may occur on unseen test data.
Experiments on diverse datasets with different OOD scenarios show that CLIPood consistently outperforms existing generalization techniques.
arXiv Detail & Related papers (2023-02-02T04:27:54Z) - An Information-Theoretic Approach for Estimating Scenario Generalization
in Crowd Motion Prediction [27.10815774845461]
We propose a novel scoring method, which characterizes generalization of models trained on source crowd scenarios and applied to target crowd scenarios.
The Interaction component aims to characterize the difficulty of scenario domains, while the diversity of a scenario domain is captured in the Diversity score.
Our experimental results validate the efficacy of the proposed method on several simulated and real-world (source,target) generalization tasks.
arXiv Detail & Related papers (2022-11-02T01:39:30Z) - Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z) - An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z) - Learning from Heterogeneous Data Based on Social Interactions over
Graphs [58.34060409467834]
This work proposes a decentralized architecture, where individual agents aim at solving a classification problem while observing streaming features of different dimensions.
We show that this strategy enables the agents to learn consistently under this highly-heterogeneous setting.
arXiv Detail & Related papers (2021-12-17T12:47:18Z) - Linear Classifiers that Encourage Constructive Adaptation [6.324366770332667]
We study the dynamics of prediction and adaptation as a two-stage game, and characterize optimal strategies for the model designer and its decision subjects.
In benchmarks on simulated and real-world datasets, we find that classifiers trained using our method maintain the accuracy of existing approaches while inducing higher levels of improvement and less manipulation.
arXiv Detail & Related papers (2020-10-31T20:35:32Z) - Learning Diverse Representations for Fast Adaptation to Distribution
Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.