Novel Deep Neural Network Classifier Characterization Metrics with Applications to Dataless Evaluation
- URL: http://arxiv.org/abs/2407.13000v1
- Date: Wed, 17 Jul 2024 20:40:46 GMT
- Title: Novel Deep Neural Network Classifier Characterization Metrics with Applications to Dataless Evaluation
- Authors: Nathaniel Dean, Dilip Sarkar
- Abstract summary: In this work, we evaluate a Deep Neural Network (DNN) classifier's training quality without any example dataset.
Our empirical study of the proposed method for ResNet18, trained with the CIFAR10 and CIFAR100 datasets, confirms that dataless evaluation of DNN classifiers is indeed possible.
- Score: 1.6574413179773757
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The mainstream AI community has seen a rise in large-scale open-source classifiers, often pre-trained on vast datasets and tested on standard benchmarks; however, users facing diverse needs and limited, expensive test data may be overwhelmed by the available choices. Deep Neural Network (DNN) classifiers undergo training, validation, and testing phases using example datasets, with the testing phase focused on determining the classification accuracy of test examples without delving into the inner workings of the classifier. In this work, we evaluate a DNN classifier's training quality without any example dataset. It is assumed that a DNN is a composition of a feature extractor and a classifier, the latter being the penultimate fully connected layer. The quality of the classifier is estimated using its weight vectors. The feature extractor is characterized using two metrics that utilize the feature vectors it produces when synthetic data is fed as input; these synthetic input vectors are produced by backpropagating desired outputs of the classifier. Our empirical study of the proposed method for ResNet18, trained with the CIFAR10 and CIFAR100 datasets, confirms that dataless evaluation of DNN classifiers is indeed possible.
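Below is a minimal PyTorch sketch of the pipeline the abstract describes, assuming a torchvision ResNet18 and CIFAR-sized inputs. The concrete quality measures used here (pairwise cosine separation of the classifier's weight vectors, and feature norms of synthesized inputs) are illustrative stand-ins, not the paper's exact metric definitions.

```python
# Dataless-evaluation sketch: split the DNN into feature extractor +
# final fully connected classifier, synthesize inputs by backpropagating
# desired outputs, then inspect weight vectors and feature vectors.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=10).eval()

# Feature extractor = everything before the final fully connected layer.
feature_extractor = torch.nn.Sequential(*list(model.children())[:-1])
classifier = model.fc  # weight matrix: one weight vector per class

def synthesize_input(target_class, steps=100, lr=0.1):
    """Backpropagate a desired classifier output to the input, producing
    a synthetic example for `target_class` (no real data involved)."""
    x = torch.randn(1, 3, 32, 32, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), torch.tensor([target_class]))
        loss.backward()
        opt.step()
    return x.detach()

# Classifier quality from weight vectors alone: well-separated classes
# should have nearly orthogonal (low cosine similarity) weight vectors.
W = F.normalize(classifier.weight, dim=1)
cos = W @ W.t()
off_diag = cos - torch.eye(cos.size(0))
print("max inter-class weight cosine:", off_diag.max().item())

# Feature-extractor characterization: feed the synthetic inputs and
# inspect the feature vectors they produce.
feats = torch.cat([feature_extractor(synthesize_input(c)).flatten(1)
                   for c in range(10)])
print("feature norms per class:", feats.norm(dim=1))
```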
Related papers
- Fantastic DNN Classifiers and How to Identify them without Data [0.685316573653194]
We show that the quality of a trained DNN classifier can be assessed without any example data.
We have developed two metrics: one using the features of the prototypes and the other using adversarial examples corresponding to each prototype.
Empirical evaluations show that the accuracy obtained from test examples is directly proportional to the quality measures obtained from the proposed metrics (the adversarial-example metric is sketched below).
arXiv Detail & Related papers (2023-05-24T20:54:48Z)
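A hedged sketch of one ingredient of the second metric: growing an FGSM-style adversarial perturbation from a class prototype until the prediction flips. Reading the required perturbation budget as a quality signal, and all shapes, are assumptions for illustration; the paper's exact metric is not reproduced.

```python
# Adversarial example corresponding to a class prototype: the L-inf
# budget needed to flip the class serves as a stand-in quality signal.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(num_classes=10).eval()

def adversarial_from_prototype(prototype, true_class, eps=1e-3, max_steps=100):
    """Grow an FGSM perturbation until the prototype's predicted class flips."""
    x = prototype.clone()
    for step in range(1, max_steps + 1):
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), torch.tensor([true_class]))
        grad, = torch.autograd.grad(loss, x)
        x = (x + eps * grad.sign()).detach()  # step away from the true class
        if model(x).argmax(1).item() != true_class:
            return x, step * eps              # budget that flipped the class
    return x, max_steps * eps

prototype = torch.randn(1, 3, 32, 32)  # stand-in for a synthesized prototype
_, budget = adversarial_from_prototype(prototype, true_class=0)
print("L-inf budget needed to flip the prototype:", budget)
```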
- Explaining Cross-Domain Recognition with Interpretable Deep Classifier [100.63114424262234]
Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as the evidence upon which the classifier makes its decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimum reject options (the evidence-retrieval step is sketched below).
arXiv Detail & Related papers (2022-11-15T15:58:56Z)
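A minimal sketch of the evidence-retrieval idea, assuming a placeholder feature extractor and random stand-in data: for a target sample, retrieve its nearest source-domain samples in feature space and surface them as the evidence behind the prediction.

```python
# Nearest source samples as decision evidence (placeholder model/data).
import torch
import torch.nn.functional as F

feat = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))

source_x = torch.randn(500, 3, 32, 32)   # labeled source-domain pool
target_x = torch.randn(1, 3, 32, 32)     # one unlabeled target sample

with torch.no_grad():
    src_f = F.normalize(feat(source_x), dim=1)
    tgt_f = F.normalize(feat(target_x), dim=1)
    sims = src_f @ tgt_f.t()              # cosine similarity to every source sample
    evidence_idx = sims.squeeze(1).topk(k=5).indices
print("indices of nearest source samples (evidence):", evidence_idx.tolist())
```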
- Mutual Information Learned Classifiers: an Information-theoretic Viewpoint of Training Deep Learning Classification Systems [9.660129425150926]
Cross-entropy loss can easily lead to models that exhibit severe overfitting.
In this paper, we prove that minimizing the existing cross-entropy loss when training DNN classifiers essentially learns the conditional entropy of the underlying data distribution.
We propose a mutual information learning framework where we train DNN classifiers by learning the mutual information between the label and the input (a batch-level surrogate is sketched below).
arXiv Detail & Related papers (2022-10-03T15:09:19Z)
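One way to make the objective concrete is the batch surrogate I(Y; X) = H(Y) - H(Y|X), with H(Y) estimated from the batch-averaged prediction marginal and H(Y|X) from per-sample prediction entropies. This is a common estimator used for such objectives, not necessarily the paper's exact loss.

```python
# Mutual-information-style objective: maximize H(Y) - H(Y|X) over a batch.
import torch

def mutual_information_loss(logits):
    p = logits.softmax(dim=1)                              # per-sample p(y|x)
    marginal = p.mean(dim=0)                               # batch estimate of p(y)
    h_y = -(marginal * marginal.clamp_min(1e-12).log()).sum()
    h_y_given_x = -(p * p.clamp_min(1e-12).log()).sum(dim=1).mean()
    return -(h_y - h_y_given_x)  # minimizing this maximizes the MI estimate

logits = torch.randn(64, 10, requires_grad=True)
loss = mutual_information_loss(logits)
loss.backward()
print("negative MI estimate:", loss.item())
```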
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of training a neural network for classification with the classifier randomly initialized as a simplex equiangular tight frame (ETF) and kept fixed during training.
Our experimental results show that this method achieves comparable performance on image classification for balanced datasets (the ETF construction is sketched below).
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
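A sketch of the fixed-classifier construction, assuming the standard simplex ETF formula W = sqrt(K/(K-1)) * U (I_K - (1/K) 11^T) with U having orthonormal columns: K class vectors with equal norms and equal pairwise angles, frozen so only the feature extractor learns. The training loop around it is omitted.

```python
# Classifier fixed as a simplex equiangular tight frame (ETF).
import torch

def simplex_etf(num_classes: int, feat_dim: int) -> torch.Tensor:
    K, d = num_classes, feat_dim
    U, _ = torch.linalg.qr(torch.randn(d, K))   # orthonormal columns (d x K)
    center = torch.eye(K) - torch.ones(K, K) / K
    W = (K / (K - 1)) ** 0.5 * (U @ center)     # columns are class vectors
    return W.t()                                 # (K x d) weight matrix

W = simplex_etf(num_classes=10, feat_dim=512)
fc = torch.nn.Linear(512, 10, bias=False)
fc.weight.data.copy_(W)
fc.weight.requires_grad_(False)  # classifier stays fixed; train the backbone only

# Pairwise cosines of a simplex ETF all equal -1/(K-1).
Wn = torch.nn.functional.normalize(W, dim=1)
print((Wn @ Wn.t())[0, 1].item(), -1 / 9)
```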
- Detecting Handwritten Mathematical Terms with Sensor Based Data [71.84852429039881]
We propose a solution to the UbiComp 2021 Challenge by Stabilo, in which handwritten mathematical terms are to be automatically classified.
The input data set contains data of different writers, with label strings constructed from a total of 15 different possible characters.
arXiv Detail & Related papers (2021-09-12T19:33:34Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks, including CIFAR-10, CIFAR-100, and CINIC-10 (the calibration step is sketched below).
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
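A minimal single-machine sketch of the calibration step, assuming per-class Gaussian feature statistics are already available; the federated aggregation of those statistics across clients is omitted, and all shapes are illustrative.

```python
# CCVR-style calibration: sample virtual representations from a per-class
# Gaussian mixture, then retrain only the classifier head on them.
import torch

feat_dim, num_classes, samples_per_class = 512, 10, 100

# Per-class feature mean and (diagonal) std, e.g., aggregated from clients.
means = torch.randn(num_classes, feat_dim)
stds = torch.rand(num_classes, feat_dim) + 0.5

virtual_feats, virtual_labels = [], []
for c in range(num_classes):
    virtual_feats.append(means[c] + stds[c] * torch.randn(samples_per_class, feat_dim))
    virtual_labels.append(torch.full((samples_per_class,), c))
x = torch.cat(virtual_feats)
y = torch.cat(virtual_labels)

# Calibrate (retrain) only the classifier head on the virtual features.
head = torch.nn.Linear(feat_dim, num_classes)
opt = torch.optim.SGD(head.parameters(), lr=0.1)
for _ in range(50):
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(head(x), y)
    loss.backward()
    opt.step()
print("calibration loss:", loss.item())
```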
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data Classification [69.26747803963907]
The Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes a Canonical Polyadic (CP) decomposition on its parameters.
It handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN and validate its performance on real-world hyperspectral datasets (a rank-R layer is sketched below).
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
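A sketch of a rank-R layer under the CP constraint: each hidden unit's weight tensor over a matrix input X is sum_r a_r (outer) b_r, so its pre-activation is sum_r a_r^T X b_r and the input is never vectorized. All dimensions below are illustrative.

```python
# Rank-R layer: CP-constrained weights over multilinear (matrix) inputs.
import torch

class RankRLayer(torch.nn.Module):
    def __init__(self, i1, i2, hidden, rank):
        super().__init__()
        self.A = torch.nn.Parameter(torch.randn(hidden, rank, i1) * 0.1)
        self.B = torch.nn.Parameter(torch.randn(hidden, rank, i2) * 0.1)
        self.bias = torch.nn.Parameter(torch.zeros(hidden))

    def forward(self, x):                    # x: (batch, i1, i2), kept multilinear
        # Compute a_r^T X b_r per hidden unit and rank term, then sum over r.
        xa = torch.einsum('bij,hri->bhrj', x, self.A)   # contract rows with a_r
        out = torch.einsum('bhrj,hrj->bh', xa, self.B)  # contract cols, sum ranks
        return torch.relu(out + self.bias)

layer = RankRLayer(i1=9, i2=200, hidden=32, rank=3)  # e.g., spatial patch x bands
x = torch.randn(4, 9, 200)                            # hyperspectral-style input
print(layer(x).shape)                                 # torch.Size([4, 32])
```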
- Learning from Incomplete Features by Simultaneous Training of Neural Networks and Sparse Coding [24.3769047873156]
This paper addresses the problem of training a classifier on a dataset with incomplete features.
We assume that different subsets of features (random or structured) are available at each data instance.
A new supervised learning method is developed to train a general classifier using only a subset of features per sample (the masked-input setup is loosely illustrated below).
arXiv Detail & Related papers (2020-11-28T02:20:39Z)
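A loose illustration of the training setup only: each sample exposes a random subset of its features, encoded by a binary mask. The mask-concatenation trick is an assumption for illustration, and the paper's sparse-coding component, which is the core of the method, is omitted here.

```python
# Training on incomplete features: zero-fill missing entries and pass the
# observation mask so the network knows which features are present.
import torch

d, num_classes = 64, 10
net = torch.nn.Sequential(
    torch.nn.Linear(2 * d, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, num_classes),
)

x = torch.randn(32, d)
y = torch.randint(0, num_classes, (32,))
mask = (torch.rand(32, d) < 0.6).float()   # random feature subset per sample

inp = torch.cat([x * mask, mask], dim=1)   # observed values + observation mask
loss = torch.nn.functional.cross_entropy(net(inp), y)
loss.backward()
print("loss on masked features:", loss.item())
```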
- FIND: Human-in-the-Loop Debugging Deep Text Classifiers [55.135620983922564]
We propose FIND -- a framework which enables humans to debug deep learning text classifiers by disabling irrelevant hidden features.
Experiments show that by using FIND, humans can improve CNN text classifiers that were trained on different types of imperfect datasets (the masking mechanism is sketched below).
arXiv Detail & Related papers (2020-10-10T12:52:53Z)
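A sketch of the disabling mechanism, assuming a small CNN text classifier: a human-set binary mask zeroes out filter activations judged irrelevant before the final layer, leaving the trained weights untouched. Model shapes and the masked filter indices are placeholders.

```python
# Human-in-the-loop feature disabling for a CNN text classifier.
import torch

vocab, emb, n_filters, num_classes = 5000, 100, 64, 2

embed = torch.nn.Embedding(vocab, emb)
conv = torch.nn.Conv1d(emb, n_filters, kernel_size=3, padding=1)
fc = torch.nn.Linear(n_filters, num_classes)

# Mask set by the human debugger: 0 disables a filter, 1 keeps it.
filter_mask = torch.ones(n_filters)
filter_mask[[3, 17, 42]] = 0.0   # disable filters found to track irrelevant patterns

def classify(token_ids):
    h = conv(embed(token_ids).transpose(1, 2))   # (batch, filters, seq)
    h = torch.relu(h).max(dim=2).values          # max-pool over the sequence
    return fc(h * filter_mask)                   # disabled features contribute nothing

tokens = torch.randint(0, vocab, (8, 40))
print(classify(tokens).shape)                    # torch.Size([8, 2])
```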
- One Versus all for deep Neural Network Incertitude (OVNNI) quantification [12.734278426543332]
We propose a new technique to easily quantify the epistemic uncertainty of data.
This method consists of mixing the predictions of an ensemble of DNNs trained to classify one class versus all the other classes (OVA) with predictions from a standard DNN trained to perform all-vs-all (AVA) classification (the mixing rule is sketched below).
arXiv Detail & Related papers (2020-06-01T14:06:12Z)
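A sketch of the OVA/AVA mixing, assuming the simplest combination rule: scale each AVA softmax probability by the matching one-vs-all sigmoid score, so classes that no OVA expert claims end up with low confidence. The stand-in models here are untrained placeholders.

```python
# OVNNI-style mixing of one-vs-all (OVA) and all-vs-all (AVA) predictions.
import torch

num_classes, feat_dim = 10, 32
ava = torch.nn.Linear(feat_dim, num_classes)                      # AVA classifier
ova = [torch.nn.Linear(feat_dim, 1) for _ in range(num_classes)]  # one binary DNN per class

x = torch.randn(4, feat_dim)
with torch.no_grad():
    p_ava = ava(x).softmax(dim=1)                               # (batch, classes)
    p_ova = torch.cat([m(x).sigmoid() for m in ova], dim=1)     # (batch, classes)
    scores = p_ava * p_ova                                      # mixed confidence
    epistemic = 1.0 - scores.max(dim=1).values                  # low max = high uncertainty
print("per-sample uncertainty:", epistemic)
```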
- Robust Classification of High-Dimensional Spectroscopy Data Using Deep Learning and Data Synthesis [0.5801044612920815]
A novel application of a locally-connected neural network (NN) for the binary classification of spectroscopy data is proposed.
A two-step classification process is presented as an alternative to the binary and one-class classification paradigms.
arXiv Detail & Related papers (2020-03-26T11:33:52Z)