AI Total: Analyzing Security ML Models with Imperfect Data in Production
- URL: http://arxiv.org/abs/2110.07028v1
- Date: Wed, 13 Oct 2021 20:56:05 GMT
- Title: AI Total: Analyzing Security ML Models with Imperfect Data in Production
- Authors: Awalin Sopan and Konstantin Berlin
- Abstract summary: Development of new machine learning models is typically done on manually curated data sets.
We develop a web-based visualization system that allows the users to quickly gather headline performance numbers.
It also enables the users to immediately observe the root cause of an issue when something goes wrong.
- Score: 2.629585075202626
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Development of new machine learning models is typically done on manually
curated data sets, making them unsuitable for evaluating the models'
performance during operations, where the evaluation needs to be performed
automatically on incoming streams of new data. Unfortunately, pure reliance on
a fully automatic pipeline for monitoring model performance makes it difficult
to understand if any observed performance issues are due to model performance,
pipeline issues, emerging data distribution biases, or some combination of the
above. With this in mind, we developed a web-based visualization system that
allows the users to quickly gather headline performance numbers while
maintaining confidence that the underlying data pipeline is functioning
properly. It also enables the users to immediately observe the root cause of an
issue when something goes wrong. We introduce a novel way to analyze
performance under data issues using a data coverage equalizer. We describe the
various modifications and additional plots, filters, and drill-downs that we
added on top of the standard evaluation metrics typically tracked in machine
learning (ML) applications, and walk through some real-world examples that
proved valuable for introspecting our models.
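The paper does not publish an implementation of its data coverage equalizer, so the following is only a minimal sketch of the underlying idea as described in the abstract: when two evaluation windows cover data segments unevenly, per-segment metrics can be reweighted to a common reference distribution before comparing headline numbers. All function names, segment labels, and the weighting scheme here are assumptions for illustration.

```python
# Hypothetical sketch of coverage-equalized evaluation; names and the
# weighting scheme are assumptions, not the paper's actual implementation.

def pooled_accuracy(segment_acc, counts):
    """Naive headline accuracy: weight each segment by its own sample
    count, so shifts in incoming-data coverage move the number."""
    n = sum(counts.values())
    return sum(segment_acc[s] * counts[s] for s in counts) / n

def equalized_accuracy(segment_acc, reference_weights):
    """Coverage-equalized accuracy: weight each segment by a fixed
    reference distribution instead of the window's own (possibly
    biased) coverage, so windows become comparable."""
    total = sum(reference_weights.values())
    return sum(
        segment_acc[s] * (reference_weights[s] / total)
        for s in reference_weights
        if s in segment_acc
    )

# Two evaluation windows with identical per-segment accuracy but very
# different coverage: the pooled number shifts, the equalized one does not.
acc = {"pe": 0.99, "scripts": 0.90}          # hypothetical file-type segments
week1 = {"pe": 900, "scripts": 100}
week2 = {"pe": 100, "scripts": 900}
reference = {"pe": 500, "scripts": 500}

print(round(pooled_accuracy(acc, week1), 3))        # 0.981
print(round(pooled_accuracy(acc, week2), 3))        # 0.909
print(round(equalized_accuracy(acc, reference), 3)) # 0.945, either week
```

Under this reading, a drop in the pooled number with a stable equalized number points at a coverage shift rather than a model regression.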
Related papers
- AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z) - An Integrated Data Processing Framework for Pretraining Foundation Models [57.47845148721817]
Researchers and practitioners often have to manually curate datasets from different sources.
We propose a data processing framework that integrates a Processing Module and an Analyzing Module.
The proposed framework is easy to use and highly flexible.
arXiv Detail & Related papers (2024-02-26T07:22:51Z) - AttributionScanner: A Visual Analytics System for Model Validation with Metadata-Free Slice Finding [29.07617945233152]
Data slice finding is an emerging technique for validating machine learning (ML) models by identifying and analyzing subgroups in a dataset that exhibit poor performance.
This approach faces significant challenges, including the laborious and costly requirement for additional metadata.
We introduce AttributionScanner, an innovative human-in-the-loop Visual Analytics (VA) system, designed for metadata-free data slice finding.
Our system identifies interpretable data slices that involve common model behaviors and visualizes these patterns through an Attribution Mosaic design.
arXiv Detail & Related papers (2024-01-12T09:17:32Z) - QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate, for example, that leveraging its insights improves the performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z) - Generalizable Error Modeling for Search Relevance Data Annotation Tasks [0.0]
Human data annotation is critical in shaping the quality of machine learning (ML) and artificial intelligence (AI) systems.
One significant challenge in this context is posed by annotation errors, as their effects can degrade the performance of ML models.
This paper presents a predictive error model trained to detect potential errors in search relevance annotation tasks for three industry-scale ML applications.
arXiv Detail & Related papers (2023-10-08T21:21:19Z) - Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z) - SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing MLOps approaches, however, do not supply the procedures and pipelines needed for the actual deployment of machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor framework and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z) - Exploring the Efficacy of Automatically Generated Counterfactuals for Sentiment Analysis [17.811597734603144]
We propose an approach to automatically generating counterfactual data for data augmentation and explanation.
A comprehensive evaluation on several different datasets, using a variety of state-of-the-art benchmarks, demonstrates how our approach can achieve significant improvements in model performance.
arXiv Detail & Related papers (2021-06-29T10:27:01Z) - MAIN: Multihead-Attention Imputation Networks [4.427447378048202]
We propose a novel mechanism based on multi-head attention which can be applied effortlessly in any model.
Our method inductively models patterns of missingness in the input data in order to increase the performance of the downstream task.
arXiv Detail & Related papers (2021-02-10T13:50:02Z) - How Training Data Impacts Performance in Learning-based Control [67.7875109298865]
This paper derives an analytical relationship between the density of the training data and the control performance.
We formulate a quality measure for the data set, which we refer to as the $\rho$-gap.
We show how the $\rho$-gap can be applied to a feedback linearizing control law.
arXiv Detail & Related papers (2020-05-25T12:13:49Z) - Self-Updating Models with Error Remediation [0.5156484100374059]
We propose a framework, Self-Updating Models with Error Remediation (SUMER), in which a deployed model updates itself as new data becomes available.
A key component of SUMER is the notion of error remediation as self-labeled data can be susceptible to the propagation of errors.
We find that self-updating models (SUMs) generally perform better than models that do not attempt to self-update when presented with additional previously-unseen data.
arXiv Detail & Related papers (2020-05-19T23:09:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.