Knee or ROC
- URL: http://arxiv.org/abs/2401.07390v1
- Date: Sun, 14 Jan 2024 23:25:44 GMT
- Title: Knee or ROC
- Authors: Veronica Wendt, Byunggu Yu, Caleb Kelly, and Junwhan Kim
- Abstract summary: Self-attention transformers have demonstrated accuracy for image classification with smaller data sets.
We consider calculating accuracy using the knee method to determine threshold values on an ad-hoc basis.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-attention transformers have demonstrated accuracy for image
classification with smaller data sets. However, a limitation is that tests
to date are based upon single-class image detection with known representation
of image populations. For instances where the input image classes may be
greater than one and test sets that lack full information on representation of
image populations, accuracy calculations must adapt. The Receiver Operating
Characteristic (ROC) accuracy threshold can address the instances of
multi-class input images. However, this approach is unsuitable in instances
where image population representation is unknown. We consider calculating
accuracy using the knee method to determine threshold values on an ad-hoc
basis. Results of ROC curve and knee thresholds for a multi-class data set,
created from CIFAR-10 images, are discussed for multi-class image detection.
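
The knee criterion described in the abstract can be sketched minimally: compute the ROC points from scores and labels, then pick the point farthest from the chord joining (0,0) and (1,1), which requires no prior knowledge of image-population representation. This is an illustrative sketch, not the paper's implementation; the function names and the specific distance criterion are assumptions.

```python
import numpy as np

def roc_points(labels, scores):
    """Compute ROC points (FPR, TPR) by sweeping a threshold over sorted scores."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    order = np.argsort(-scores)          # descending score order
    labels = labels[order]
    tps = np.cumsum(labels)              # true positives at each cut
    fps = np.cumsum(1 - labels)          # false positives at each cut
    tpr = tps / max(tps[-1], 1)
    fpr = fps / max(fps[-1], 1)
    # Prepend the (0, 0) corner so the curve starts at the origin.
    return np.concatenate([[0.0], fpr]), np.concatenate([[0.0], tpr])

def knee_index(fpr, tpr):
    """Pick the ROC point farthest from the chord through (0,0) and (1,1).
    The perpendicular distance to that chord is |tpr - fpr| / sqrt(2), so the
    knee is the point maximizing tpr - fpr (a simple ad-hoc threshold choice
    that does not depend on class prevalence)."""
    return int(np.argmax(tpr - fpr))
```

A usage sketch: `fpr, tpr = roc_points([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` followed by `knee_index(fpr, tpr)` returns the index of the knee point, from which the corresponding score threshold can be read off.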
Related papers
- CROCODILE: Causality aids RObustness via COntrastive DIsentangled LEarning [8.975676404678374]
We introduce our CROCODILE framework, showing how tools from causality can foster a model's robustness to domain shift.
We apply our method to multi-label lung disease classification from CXRs, utilizing over 750000 images.
arXiv Detail & Related papers (2024-08-09T09:08:06Z) - Annotation Cost-Efficient Active Learning for Deep Metric Learning Driven Remote Sensing Image Retrieval [3.2109665109975696]
ANNEAL aims to create a small but informative training set made up of similar and dissimilar image pairs.
The informativeness of image pairs is evaluated by combining uncertainty and diversity criteria.
This way of annotating images significantly reduces the annotation cost compared to annotating images with land-use land-cover class labels.
arXiv Detail & Related papers (2024-06-14T15:08:04Z) - RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z) - Performance of GAN-based augmentation for deep learning COVID-19 image
classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z) - Discriminative Class Tokens for Text-to-Image Diffusion Models [107.98436819341592]
We propose a non-invasive fine-tuning technique that capitalizes on the expressive potential of free-form text.
Our method is fast compared to prior fine-tuning methods and does not require a collection of in-class images.
We evaluate our method extensively, showing that the generated images are: (i) more accurate and of higher quality than standard diffusion models, (ii) can be used to augment training data in a low-resource setting, and (iii) reveal information about the data used to train the guiding classifier.
arXiv Detail & Related papers (2023-03-30T05:25:20Z) - Traditional Classification Neural Networks are Good Generators: They are
Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module to make the gradients semantic-aware, enabling the synthesis of plausible images.
We show that our method is also applicable to text-to-image generation by leveraging image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z) - Virus-MNIST: Machine Learning Baseline Calculations for Image
Classification [0.0]
The Virus-MNIST data set is a collection of thumbnail images that is similar in style to the ubiquitous MNIST hand-written digits.
It is poised to take on a role in benchmarking progress of virus model training.
arXiv Detail & Related papers (2021-11-03T17:44:23Z) - A Hierarchical Transformation-Discriminating Generative Model for Few
Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z) - Radon cumulative distribution transform subspace modeling for image
classification [18.709734704950804]
We present a new supervised image classification method applicable to a broad class of image deformation models.
The method makes use of the previously described Radon Cumulative Distribution Transform (R-CDT) for image data.
In addition to the test accuracy performances, we show improvements in terms of computational efficiency.
arXiv Detail & Related papers (2020-04-07T19:47:26Z) - Overinterpretation reveals image classification model pathologies [15.950659318117694]
Convolutional neural networks (CNNs) on popular benchmarks exhibit troubling pathologies that allow them to attain high accuracy even in the absence of semantically salient features.
We demonstrate that neural networks trained on CIFAR-10 and ImageNet suffer from overinterpretation.
Although these patterns portend potential model fragility in real-world deployment, they are in fact valid statistical patterns of the benchmark that alone suffice to attain high test accuracy.
arXiv Detail & Related papers (2020-03-19T17:12:23Z) - I Am Going MAD: Maximum Discrepancy Competition for Comparing
Classifiers Adaptively [135.7695909882746]
We introduce the MAximum Discrepancy (MAD) competition.
We adaptively sample a small test set from an arbitrarily large corpus of unlabeled images.
Human labeling on the resulting model-dependent image sets reveals the relative performance of the competing classifiers.
arXiv Detail & Related papers (2020-02-25T03:32:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences.