FreaAI: Automated extraction of data slices to test machine learning models
- URL: http://arxiv.org/abs/2108.05620v1
- Date: Thu, 12 Aug 2021 09:21:16 GMT
- Title: FreaAI: Automated extraction of data slices to test machine learning models
- Authors: Samuel Ackerman, Orna Raz, Marcel Zalmanovici
- Abstract summary: We show the feasibility of automatically extracting feature models that result in explainable data slices over which the ML solution under-performs.
Our novel technique, IBM FreaAI aka FreaAI, extracts such slices from structured ML test data or any other labeled data.
- Score: 2.475112368179548
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) solutions are prevalent. However, many challenges exist
in making these solutions business-grade. One major challenge is to ensure that
the ML solution provides its expected business value. In order to do that, one
has to bridge the gap between the way ML model performance is measured and the
solution requirements. In previous work (Barash et al., "Bridging the gap...")
we demonstrated the effectiveness of utilizing feature models in bridging this
gap. Whereas ML performance metrics, such as the accuracy or F1-score of a
classifier, typically measure the average ML performance, feature models shed
light on explainable data slices that are too far from that average, and
therefore might indicate unsatisfied requirements. For example, the overall
accuracy of a bank text terms classifier may be very high, say $98\% \pm 2\%$,
yet it might perform poorly for terms that include short descriptions and
originate from commercial accounts. A business requirement, which may be
implicit in the training data, may be to perform well regardless of the type of
account and length of the description. Therefore, the under-performing data
slice that includes short descriptions and commercial accounts suggests
poorly-met requirements. In this paper we show the feasibility of automatically
extracting feature models that result in explainable data slices over which the
ML solution under-performs. Our novel technique, IBM FreaAI aka FreaAI,
extracts such slices from structured ML test data or any other labeled data. We
demonstrate that FreaAI can automatically produce explainable and
statistically-significant data slices over seven open datasets.
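As an illustration of the slice-search idea (a minimal sketch, not the FreaAI implementation itself), the following Python code scans single features and feature pairs of labeled test data and flags slices whose accuracy is significantly below the overall accuracy. The `correct` column, the support threshold, and the binomial test are assumptions of this sketch:

```python
import itertools

import pandas as pd
from scipy.stats import binomtest

def underperforming_slices(df: pd.DataFrame, correct_col: str = "correct",
                           min_support: int = 50, alpha: float = 0.01):
    """Flag slices (feature-value combinations) whose accuracy is
    significantly below the model's overall accuracy."""
    overall_acc = df[correct_col].mean()
    features = [c for c in df.columns if c != correct_col]
    candidates = [(f,) for f in features] + list(itertools.combinations(features, 2))
    slices = []
    for cols in candidates:
        for values, group in df.groupby(list(cols)):
            if len(group) < min_support:
                continue  # too small to be statistically meaningful
            if not isinstance(values, tuple):
                values = (values,)
            hits = int(group[correct_col].sum())
            # One-sided test: is the slice accuracy below the overall accuracy?
            p = binomtest(hits, len(group), overall_acc, alternative="less").pvalue
            if p < alpha:
                slices.append({"slice": dict(zip(cols, values)),
                               "size": len(group),
                               "accuracy": hits / len(group),
                               "p_value": p})
    return sorted(slices, key=lambda s: s["accuracy"])
```

Here `df` would hold categorical test features plus a boolean `correct` column marking whether the model's prediction matched the label; numeric features would first need discretizing into ranges, which the paper's technique derives automatically.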
Related papers
- Matchmaker: Self-Improving Large Language Model Programs for Schema Matching [60.23571456538149]
We propose a compositional language model program for schema matching, comprising candidate generation, refinement, and confidence scoring.
Matchmaker self-improves in a zero-shot manner without the need for labeled demonstrations.
Empirically, we demonstrate on real-world medical schema matching benchmarks that Matchmaker outperforms previous ML-based approaches.
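A hedged sketch of what such a compositional program might look like; the `llm` helper is a hypothetical stand-in for any chat-completion call, and the prompts are illustrative rather than the paper's:

```python
def llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion call; wire up a real client here."""
    raise NotImplementedError

def match_attribute(source_attr: str, target_schema: list[str], k: int = 5) -> str:
    # Candidate generation: ask the model for plausible target attributes.
    raw = llm(f"List up to {k} attributes from {target_schema} that could "
              f"correspond to the source attribute '{source_attr}'.")
    # Refinement: keep only candidates that actually exist in the target schema.
    candidates = [c.strip() for c in raw.splitlines() if c.strip() in target_schema]
    # Confidence scoring: score each survivor and return the best match.
    scored = [(float(llm(f"Confidence (0-1) that '{source_attr}' maps to '{c}'? "
                         "Answer with a number only.")), c) for c in candidates]
    return max(scored)[1] if scored else "NO_MATCH"
```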
arXiv Detail & Related papers (2024-10-31T16:34:03Z) - LLM-Select: Feature Selection with Large Language Models [64.5099482021597]
Large language models (LLMs) are capable of selecting the most predictive features, with performance rivaling the standard tools of data science.
Our findings suggest that LLMs may be useful not only for selecting the best features for training but also for deciding which features to collect in the first place.
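One way to check such a claim is to take a feature ranking elicited from an LLM (given only feature names and a task description) and measure how predictive its top-k features are. The scikit-learn calls below are real; the ranking itself is the assumed LLM output:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def score_feature_ranking(X, y, ranked_features: list[str], k: int = 5) -> float:
    """Cross-validated accuracy using only the top-k features of a ranking,
    e.g., one produced by prompting an LLM with the feature names."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X[ranked_features[:k]], y, cv=5).mean()
```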
arXiv Detail & Related papers (2024-07-02T22:23:40Z) - Advancing Anomaly Detection: Non-Semantic Financial Data Encoding with LLMs [49.57641083688934]
We introduce a novel approach to anomaly detection in financial data using Large Language Model (LLM) embeddings.
Our experiments demonstrate that LLMs contribute valuable information to anomaly detection as our models outperform the baselines.
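The summary does not name the paper's detector; as a minimal stand-in, one can fit any off-the-shelf detector, such as scikit-learn's IsolationForest, on precomputed LLM embeddings of the records:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def flag_anomalies(embeddings: np.ndarray, contamination: float = 0.01) -> np.ndarray:
    """embeddings: (n_records, d) LLM embeddings of financial records, assumed
    precomputed by any embedding endpoint. Returns a boolean mask where
    True marks a record flagged as anomalous."""
    detector = IsolationForest(contamination=contamination, random_state=0)
    return detector.fit_predict(embeddings) == -1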
arXiv Detail & Related papers (2024-06-05T20:19:09Z) - Can LLMs Separate Instructions From Data? And What Do We Even Mean By That? [60.50127555651554]
Large Language Models (LLMs) show impressive results in numerous practical applications, but they lack essential safety features.
This makes them vulnerable to manipulations such as indirect prompt injections and generally unsuitable for safety-critical tasks.
We introduce a formal measure for instruction-data separation and an empirical variant that is calculable from a model's outputs.
arXiv Detail & Related papers (2024-03-11T15:48:56Z) - Let's Predict Who Will Move to a New Job [0.0]
We discuss how machine learning is used to predict who will move to a new job.
Data is pre-processed into a suitable format for ML models.
Models are assessed using decision support metrics such as precision, recall, F1-Score, and accuracy.
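Computing the decision-support metrics the summary lists is straightforward with scikit-learn (a generic sketch, not the paper's code):

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

def decision_support_report(y_true, y_pred) -> dict:
    """The four metrics named above, for a binary will-move/won't-move target."""
    return {"accuracy": accuracy_score(y_true, y_pred),
            "precision": precision_score(y_true, y_pred),
            "recall": recall_score(y_true, y_pred),
            "f1": f1_score(y_true, y_pred)}
```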
arXiv Detail & Related papers (2023-09-15T11:43:09Z) - AI Model Disgorgement: Methods and Choices [127.54319351058167]
We introduce a taxonomy of possible disgorgement methods that are applicable to modern machine learning systems.
We investigate the meaning of "removing the effects" of data in the trained model in a way that does not require retraining from scratch.
arXiv Detail & Related papers (2023-04-07T08:50:18Z) - Privacy Adhering Machine Un-learning in NLP [66.17039929803933]
Real-world industry applications use Machine Learning to build models on user data.
Privacy mandates require effort both in terms of data deletion and model retraining.
Continuous data removal and model retraining steps do not scale.
We propose Machine Unlearning to tackle this challenge.
arXiv Detail & Related papers (2022-12-19T16:06:45Z) - ezDPS: An Efficient and Zero-Knowledge Machine Learning Inference
Pipeline [2.0813318162800707]
We propose ezDPS, a new efficient and zero-knowledge Machine Learning inference scheme.
ezDPS is a zkML pipeline in which the data is processed in multiple stages for high accuracy.
We show that ezDPS is one to three orders of magnitude more efficient than the generic circuit-based approach in all metrics.
arXiv Detail & Related papers (2022-12-11T06:47:28Z) - Classifier Data Quality: A Geometric Complexity Based Method for
Automated Baseline And Insights Generation [4.722075132982135]
A major challenge is to determine when the level of incorrectness, e.g., model accuracy or F1 score for classifiers, is acceptable.
We have developed complexity measures, which quantify how difficult given observations are to assign to their true class label.
These measures are superior to the best practice baseline in that, for a linear computation cost, they also quantify each observation's classification complexity in an explainable form.
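The summary does not name the measures themselves; a classical instance of such a per-observation complexity score is k-nearest-neighbor label disagreement, sketched here as an assumed example:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_disagreement(X: np.ndarray, y: np.ndarray, k: int = 10) -> np.ndarray:
    """Per-observation complexity: the fraction of an observation's k nearest
    neighbors that carry a different class label (0 = easy, 1 = hard)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)          # idx[:, 0] is the point itself
    return (y[idx[:, 1:]] != y[:, None]).mean(axis=1)
```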
arXiv Detail & Related papers (2021-12-22T12:17:08Z) - Machine Learning Model Drift Detection Via Weak Data Slices [5.319802998033767]
We propose a method that utilizes feature space rules, called data slices, for drift detection.
We provide experimental indications that our method can identify likely changes in ML model performance based on changes in the underlying data.
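A minimal sketch of the idea, assuming weak slices are given as feature-value dictionaries (e.g., from a FreaAI-style search): if a larger share of production data falls into known weak slices than in the reference data, performance degradation becomes likely, and no production labels are needed to notice it.

```python
import pandas as pd

def slice_drift_signal(reference: pd.DataFrame, production: pd.DataFrame,
                       weak_slices: list[dict]) -> list[dict]:
    """Compare the share of records falling into each weak slice; a rising
    share on production data is a label-free drift warning."""
    def share(df: pd.DataFrame, slc: dict) -> float:
        mask = pd.Series(True, index=df.index)
        for feature, value in slc.items():
            mask &= df[feature] == value
        return float(mask.mean())
    return [{"slice": s,
             "reference_share": share(reference, s),
             "production_share": share(production, s)} for s in weak_slices]
```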
arXiv Detail & Related papers (2021-08-11T16:55:34Z) - Insights into Performance Fitness and Error Metrics for Machine Learning [1.827510863075184]
Machine learning (ML) is the field of training machines to achieve a high level of cognition and perform human-like analysis.
This paper examines a number of the most commonly-used performance fitness and error metrics for regression and classification algorithms.
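On the regression side, the commonly-used error and fitness metrics such surveys cover can be computed directly (a generic sketch, complementing the classification metrics shown earlier):

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def regression_report(y_true, y_pred) -> dict:
    """Three of the most common regression error/fitness metrics."""
    return {"MAE": mean_absolute_error(y_true, y_pred),
            "RMSE": float(np.sqrt(mean_squared_error(y_true, y_pred))),
            "R2": r2_score(y_true, y_pred)}
```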
arXiv Detail & Related papers (2020-05-17T22:59:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.