QuantifyML: How Good is my Machine Learning Model?
- URL: http://arxiv.org/abs/2110.12588v1
- Date: Mon, 25 Oct 2021 01:56:01 GMT
- Title: QuantifyML: How Good is my Machine Learning Model?
- Authors: Muhammad Usman (University of Texas at Austin, USA), Divya Gopinath
(KBR Inc., CMU, NASA Ames), Corina S. Păsăreanu (KBR Inc., CMU, NASA Ames)
- Abstract summary: QuantifyML aims to quantify the extent to which machine learning models have learned and generalized from the given data.
The formula is analyzed with off-the-shelf model counters to obtain precise counts with respect to different model behavior.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The efficacy of machine learning models is typically determined by computing
their accuracy on test data sets. However, this may often be misleading, since
the test data may not be representative of the problem that is being studied.
With QuantifyML we aim to precisely quantify the extent to which machine
learning models have learned and generalized from the given data. Given a
trained model, QuantifyML translates it into a C program and feeds it to the
CBMC model checker to produce a formula in Conjunctive Normal Form (CNF). The
formula is analyzed with off-the-shelf model counters to obtain precise counts
with respect to different model behavior. QuantifyML enables i) evaluating
learnability by comparing the counts for the outputs to ground truth, expressed
as logical predicates, ii) comparing the performance of models built with
different machine learning algorithms (decision-trees vs. neural networks), and
iii) quantifying the safety and robustness of models.
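The pipeline described above (trained model, to C program, to CBMC-produced CNF, to model counter) can be illustrated with a minimal sketch. Everything below is hypothetical and for illustration only, not the authors' generated code: the tiny decision tree, the ground-truth predicate, and the feature bounds are made-up placeholders, while `__CPROVER_assume`, the bodiless `nondet_*` input function, and the use of a failing assertion as the property of interest follow standard CBMC conventions.

```c
/* Minimal sketch of a QuantifyML-style C harness for CBMC.
 * The model, ground truth, and bounds are illustrative placeholders. */
#include <assert.h>

/* CBMC treats a declared-but-undefined nondet_* function as a
 * nondeterministic input. */
int nondet_int(void);

/* Hypothetical trained model: a tiny decision tree over two integer
 * features, returning class label 0 or 1. */
int model(int x0, int x1) {
    if (x0 <= 5)
        return (x1 <= 2) ? 0 : 1;
    return 1;
}

/* Hypothetical ground truth, expressed as a logical predicate. */
int ground_truth(int x0, int x1) {
    return (x0 + x1 > 7) ? 1 : 0;
}

int main(void) {
    int x0 = nondet_int();
    int x1 = nondet_int();

    /* Restrict to the assumed input domain. */
    __CPROVER_assume(0 <= x0 && x0 <= 10);
    __CPROVER_assume(0 <= x1 && x1 <= 10);

    /* Satisfying assignments of the CNF that CBMC emits for this
     * harness correspond to assertion violations, i.e. to inputs on
     * which the model agrees with the ground truth; a model counter
     * applied to the CNF therefore counts those inputs. */
    assert(model(x0, x1) != ground_truth(x0, x1));
    return 0;
}
```

With an invocation along the lines of `cbmc harness.c --dimacs --outfile harness.cnf` (flag names as documented for recent CBMC releases), the CNF can be handed to an off-the-shelf model counter; in practice the count should be projected onto the bits encoding the inputs so that auxiliary CNF variables do not inflate it. Swapping in different assertions yields the per-behavior counts the abstract refers to.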
Related papers
- Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing the effect of a small "forget set" of training data on a pre-trained machine learning model -- has recently attracted interest.
Recent research shows, however, that existing machine unlearning techniques do not hold up under more challenging evaluation settings.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - The Languini Kitchen: Enabling Language Modelling Research at Different
Scales of Compute [66.84421705029624]
We introduce an experimental protocol that enables model comparisons based on equivalent compute, measured in accelerator hours.
We pre-process an existing large, diverse, and high-quality dataset of books that surpasses existing academic benchmarks in quality, diversity, and document length.
This work also provides two baseline models: a feed-forward model derived from the GPT-2 architecture and a recurrent model in the form of a novel LSTM with ten-fold throughput.
arXiv Detail & Related papers (2023-09-20T10:31:17Z) - CodeGen2: Lessons for Training LLMs on Programming and Natural Languages [116.74407069443895]
We unify encoder and decoder-based models into a single prefix-LM.
For learning methods, we explore the claim of a "free lunch" hypothesis.
For data distributions, the effect of a mixture distribution and multi-epoch training of programming and natural languages on model performance is explored.
arXiv Detail & Related papers (2023-05-03T17:55:25Z) - TRAK: Attributing Model Behavior at Scale [79.56020040993947]
We present TRAK (Tracing with the Randomly-projected After Kernel), a data attribution method that is both effective and computationally tractable for large-scale, differentiable models.
arXiv Detail & Related papers (2023-03-24T17:56:22Z) - Generalization Analysis on Learning with a Concurrent Verifier [16.298786827265673]
We analyze how the learnability of a machine learning model changes with a CV.
We show that typical error bounds based on Rademacher complexity for learning with a CV will be no larger than those of the original model.
arXiv Detail & Related papers (2022-10-11T10:51:55Z) - Benchmarking Learning Efficiency in Deep Reservoir Computing [23.753943709362794]
We introduce a benchmark of increasingly difficult tasks together with a data efficiency metric to measure how quickly machine learning models learn from training data.
We compare the learning speed of some established sequential supervised models, such as RNNs, LSTMs, or Transformers, with relatively lesser-known alternative models based on reservoir computing.
arXiv Detail & Related papers (2022-09-29T08:16:52Z) - Multifamily Malware Models [5.414308305392762]
We conduct experiments based on byte $n$-gram features to quantify the relationship between the generality of the training dataset and the accuracy of the corresponding machine learning models.
We find that neighborhood-based algorithms generalize surprisingly well, far outperforming the other machine learning techniques considered.
arXiv Detail & Related papers (2022-06-27T13:06:31Z) - Deep Learning Models for Knowledge Tracing: Review and Empirical
Evaluation [2.423547527175807]
We review and evaluate a body of deep learning knowledge tracing (DLKT) models with openly available and widely-used data sets.
The evaluated DLKT models have been reimplemented to assess the replicability of previously reported results.
arXiv Detail & Related papers (2021-12-30T14:19:27Z) - Classifier Data Quality: A Geometric Complexity Based Method for
Automated Baseline And Insights Generation [4.722075132982135]
A major challenge is to determine when the level of incorrectness, e.g., model accuracy or F1 score for classifiers, is acceptable.
We have developed complexity measures, which quantify how difficult given observations are to assign to their true class label.
These measures are superior to the best practice baseline in that, for a linear computation cost, they also quantify each observation's classification complexity in an explainable form.
arXiv Detail & Related papers (2021-12-22T12:17:08Z) - ALT-MAS: A Data-Efficient Framework for Active Testing of Machine
Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z) - Distilling Interpretable Models into Human-Readable Code [71.11328360614479]
Human-readability is an important and desirable standard for machine-learned model interpretability.
We propose to train interpretable models using conventional methods, and then distill them into concise, human-readable code.
We describe a piecewise-linear curve-fitting algorithm that produces high-quality results efficiently and reliably across a broad range of use cases.
arXiv Detail & Related papers (2021-01-21T01:46:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.