Emphasis on the Minimization of False Negatives or False Positives in
Binary Classification
- URL: http://arxiv.org/abs/2204.02526v1
- Date: Wed, 6 Apr 2022 00:33:40 GMT
- Title: Emphasis on the Minimization of False Negatives or False Positives in
Binary Classification
- Authors: Sanskriti Singh
- Abstract summary: A new method is introduced to reduce false negatives or false positives without drastically changing the overall performance or F1 score of the model.
The method involves a careful change to the real value of the input after pre-training the model.
In all models, an increase in recall or precision (minimizing false negatives or false positives, respectively) was shown without a large drop in F1 score.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The minimization of specific cases in binary classification, such as false
negatives or false positives, grows increasingly important as machine learning is
deployed in more products. While a few methods exist to bias a model toward the
reduction of specific cases, they are not very effective, hence their minimal use
in practice. To this end, a new method is introduced to reduce false negatives or
false positives without drastically changing the overall performance or F1 score
of the model. This method involves a careful change to the real value of the
input after pre-training the model. We present the results of applying this
method to various datasets, some more complex than others. Through
experimentation with multiple model architectures on these datasets, the best
model was found. In all the models, an increase in recall or precision
(minimizing false negatives or false positives, respectively) was shown without
a large drop in F1 score.
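The abstract does not spell out the mechanism, but the recall/precision trade-off it describes can be illustrated with a simple post-training decision-threshold sweep. The sketch below is a hedged stand-in, not the paper's method (which adjusts target values before fine-tuning): it selects the lowest threshold, and hence the fewest false negatives, whose F1 score stays above a chosen floor. All function names and the toy data are assumptions.

```python
# Hedged sketch: trade false negatives against false positives by
# sweeping the decision threshold of an already-trained classifier.

def confusion(y_true, y_prob, threshold):
    """Count TP, FP, FN at a given probability threshold."""
    tp = fp = fn = 0
    for t, p in zip(y_true, y_prob):
        pred = 1 if p >= threshold else 0
        if pred == 1 and t == 1:
            tp += 1
        elif pred == 1 and t == 0:
            fp += 1
        elif pred == 0 and t == 1:
            fn += 1
    return tp, fp, fn

def f1(tp, fp, fn):
    """F1 = 2*TP / (2*TP + FP + FN); defined as 0 when TP == 0."""
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def pick_threshold_for_recall(y_true, y_prob, min_f1):
    """Lowest threshold (fewest false negatives) whose F1 >= min_f1."""
    for i in range(1, 100):
        th = i / 100
        tp, fp, fn = confusion(y_true, y_prob, th)
        if f1(tp, fp, fn) >= min_f1:
            return th  # lowest acceptable threshold minimizes FNs
    return None
```

To favor precision instead (fewer false positives), the same sweep would run from high thresholds downward. This is only a baseline for the trade-off the paper measures; it does not retrain the model.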
Related papers
- Correct and Weight: A Simple Yet Effective Loss for Implicit Feedback Recommendation [36.820719132176315]
This paper introduces a novel and principled loss function, named Corrected and Weighted (CW) loss. CW loss systematically corrects for the impact of false negatives within the training objective. Experiments conducted on four large-scale, sparse benchmark datasets demonstrate the superiority of the proposed loss.
arXiv Detail & Related papers (2026-01-07T15:20:27Z)
- AMUN: Adversarial Machine UNlearning [13.776549741449557]
Adversarial Machine UNlearning (AMUN) outperforms prior state-of-the-art (SOTA) methods for image classification.
AMUN lowers the confidence of the model on the forget samples by fine-tuning the model on their corresponding adversarial examples.
arXiv Detail & Related papers (2025-03-02T14:36:31Z)
- Examining False Positives under Inference Scaling for Mathematical Reasoning [59.19191774050967]
This paper systematically examines the prevalence of false positive solutions in mathematical problem solving for language models.
We explore how false positives influence the inference time scaling behavior of language models.
arXiv Detail & Related papers (2025-02-10T07:49:35Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Decoupled Prototype Learning for Reliable Test-Time Adaptation [50.779896759106784]
Test-time adaptation (TTA) is a task that continually adapts a pre-trained source model to the target domain during inference.
One popular approach involves fine-tuning the model with a cross-entropy loss on estimated pseudo-labels.
This study reveals that minimizing the classification error of each sample causes the cross-entropy loss's vulnerability to label noise.
We propose a novel Decoupled Prototype Learning (DPL) method that features prototype-centric loss computation.
arXiv Detail & Related papers (2024-01-15T03:33:39Z)
- Anomaly detection optimization using big data and deep learning to reduce false-positive [0.0]
Anomaly-based Intrusion Detection System (IDS) has been a hot research topic because of its ability to detect new threats.
The high false-positive rate is the reason why anomaly IDS is not commonly applied in practice.
This research paper proposes applying a deep model instead of traditional models because it generalizes better.
arXiv Detail & Related papers (2022-09-28T09:52:26Z)
- ELODI: Ensemble Logit Difference Inhibition for Positive-Congruent Training [110.52785254565518]
Existing methods to reduce the negative flip rate (NFR) either do so at the expense of overall accuracy by forcing a new model to imitate the old models, or use ensembles.
We analyze the role of ensembles in reducing NFR and observe that they remove negative flips that are typically not close to the decision boundary.
We present a method, called Ensemble Logit Difference Inhibition (ELODI), to train a classification system that achieves paragon performance in both error rate and NFR.
arXiv Detail & Related papers (2022-05-12T17:59:56Z)
- Stay Positive: Knowledge Graph Embedding Without Negative Sampling [1.8275108630751844]
We propose a training procedure that obviates the need for negative sampling by adding a novel regularization term to the loss function.
Our results for two relational embedding models (DistMult and SimplE) show the merit of our proposal both in terms of performance and speed.
arXiv Detail & Related papers (2022-01-07T20:09:27Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the Importance-Guided Stochastic Gradient Descent (IGSGD) method to train models on inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Positive-Congruent Training: Towards Regression-Free Model Updates [87.25247195148187]
In image classification, sample-wise inconsistencies appear as "negative flips": a new model incorrectly predicts the output for a test sample that was correctly classified by the old (reference) model.
We propose a simple approach for PC training, Focal Distillation, which enforces congruence with the reference model.
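The "negative flip" notion above is easy to make concrete: a negative flip is a test sample the reference model classified correctly and the updated model now gets wrong. A minimal sketch of the metric (the function name is an assumption, not from the paper):

```python
def negative_flip_rate(y_true, old_pred, new_pred):
    """Fraction of samples the reference model classified correctly
    but the updated model gets wrong ("negative flips")."""
    flips = sum(1 for t, o, n in zip(y_true, old_pred, new_pred)
                if o == t and n != t)
    return flips / len(y_true)
```

A model update can raise overall accuracy while still having a nonzero negative flip rate, which is why PC training treats it as a separate objective.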
arXiv Detail & Related papers (2020-11-18T09:00:44Z)
- Monotonicity in practice of adaptive testing [0.0]
This article evaluates Bayesian network models used for computerized adaptive testing and learned with a recently proposed monotonicity gradient algorithm.
The quality of methods is empirically evaluated on a large data set of the Czech National Mathematics exam.
arXiv Detail & Related papers (2020-09-15T10:55:41Z)
- SCE: Scalable Network Embedding from Sparsest Cut [20.08464038805681]
Large-scale network embedding is to learn a latent representation for each node in an unsupervised manner.
A key of success to such contrastive learning methods is how to draw positive and negative samples.
In this paper, we propose SCE for unsupervised network embedding only using negative samples for training.
arXiv Detail & Related papers (2020-06-30T03:18:15Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.