Prescriptive and Descriptive Approaches to Machine-Learning Transparency
- URL: http://arxiv.org/abs/2204.13582v1
- Date: Wed, 27 Apr 2022 15:26:50 GMT
- Title: Prescriptive and Descriptive Approaches to Machine-Learning Transparency
- Authors: David Adkins, Bilal Alsallakh, Adeel Cheema, Narine Kokhlikyan, Emily
McReynolds, Pushkar Mishra, Chavez Procope, Jeremy Sawruk, Erin Wang, Polina
Zvyagina
- Abstract summary: We propose a preliminary approach, called Method Cards, which aims to increase the transparency and reproducibility of machine-learning systems through prescriptive documentation of commonly-used ML methods and techniques.
We showcase our proposal with an example in small object detection, and demonstrate how Method Cards can communicate key considerations for model developers.
- Score: 5.040810032102723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Specialized documentation techniques have been developed to communicate key
facts about machine-learning (ML) systems and the datasets and models they rely
on. Techniques such as Datasheets, FactSheets, and Model Cards have taken a
mainly descriptive approach, providing various details about the system
components. While the above information is essential for product developers and
external experts to assess whether the ML system meets their requirements,
other stakeholders might find it less actionable. In particular, ML engineers
need guidance on how to mitigate potential shortcomings in order to fix bugs or
improve the system's performance. We survey approaches that aim to provide such
guidance in a prescriptive way. We further propose a preliminary approach,
called Method Cards, which aims to increase the transparency and
reproducibility of ML systems by providing prescriptive documentation of
commonly-used ML methods and techniques. We showcase our proposal with an
example in small object detection, and demonstrate how Method Cards can
communicate key considerations for model developers. We further highlight
avenues for improving the user experience of ML engineers based on Method
Cards.
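The abstract does not spell out the exact fields of a Method Card, so the following is only a minimal sketch, in Python, of how prescriptive guidance for a small-object-detection method could be recorded. All field names and recommendations are illustrative assumptions for this sketch, not the authors' actual Method Card template.

```python
# Hypothetical, simplified Method Card for small object detection.
# Field names and recommendations are illustrative assumptions only;
# they are not taken from the paper's Method Card template.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MethodCard:
    method_name: str
    intended_use: str
    prerequisites: List[str] = field(default_factory=list)      # what must hold before applying the method
    recommended_steps: List[str] = field(default_factory=list)  # prescriptive guidance for ML engineers
    known_pitfalls: List[str] = field(default_factory=list)     # shortcomings and how to mitigate them
    evaluation_checks: List[str] = field(default_factory=list)  # how to verify the method worked

small_object_detection_card = MethodCard(
    method_name="Small object detection (illustrative example)",
    intended_use="Detecting objects that occupy a small fraction of the image.",
    prerequisites=[
        "Input resolution is high enough that target objects span several pixels.",
        "Annotations include small-object instances, not only large ones.",
    ],
    recommended_steps=[
        "Use higher-resolution feature maps or a feature pyramid for small objects.",
        "Tune anchor/prior box sizes to match the small-object size distribution.",
    ],
    known_pitfalls=[
        "Aggressive downsampling can erase small objects before the detection head sees them.",
    ],
    evaluation_checks=[
        "Report detection metrics stratified by object size, not only overall mAP.",
    ],
)

if __name__ == "__main__":
    print(small_object_detection_card.method_name)
    for step in small_object_detection_card.recommended_steps:
        print("-", step)
```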
Related papers
- Knowledge Distillation-Based Model Extraction Attack using GAN-based Private Counterfactual Explanations [1.6576983459630268]
We investigate how model explanations, particularly counterfactual explanations, can be exploited to perform model extraction attacks (MEA) within ML platforms.
We propose a novel approach for MEA based on Knowledge Distillation (KD) to enhance the efficiency of extracting a substitute model.
We also assess the effectiveness of differential privacy (DP) as a mitigation strategy.
arXiv Detail & Related papers (2024-04-04T10:28:55Z)
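As a rough illustration of the knowledge-distillation idea behind the extraction attack above (not the authors' implementation, and assuming the attacker can query the victim model for class probabilities on counterfactual-style inputs), a minimal PyTorch training step might look like this:

```python
# Minimal sketch of training a substitute (student) model from a victim
# model's soft predictions, in the spirit of knowledge distillation.
# The victim here is a local stand-in; a real attack would query a deployed API.
import torch
import torch.nn as nn
import torch.nn.functional as F

num_features, num_classes = 20, 3
victim = nn.Sequential(nn.Linear(num_features, 64), nn.ReLU(), nn.Linear(64, num_classes))
student = nn.Sequential(nn.Linear(num_features, 32), nn.ReLU(), nn.Linear(32, num_classes))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(queries: torch.Tensor) -> float:
    """One update: match the student's distribution to the victim's soft labels."""
    with torch.no_grad():
        soft_labels = F.softmax(victim(queries), dim=1)   # black-box probabilities
    log_probs = F.log_softmax(student(queries), dim=1)
    loss = -(soft_labels * log_probs).sum(dim=1).mean()   # cross-entropy with soft targets
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random stand-in queries (counterfactual examples in the paper's setting).
for _ in range(5):
    batch = torch.randn(16, num_features)
    distillation_step(batch)
```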
- Interpretable and Explainable Machine Learning Methods for Predictive Process Monitoring: A Systematic Literature Review [1.3812010983144802]
This paper presents a systematic review on the explainability and interpretability of machine learning (ML) models within the context of predictive process mining.
We provide a comprehensive overview of current methodologies and their applications across various domains.
Our findings aim to equip researchers and practitioners with a deeper understanding of how to develop and implement more trustworthy, transparent, and effective intelligent systems for process analytics.
arXiv Detail & Related papers (2023-12-29T12:43:43Z)
- Identifying Concerns When Specifying Machine Learning-Enabled Systems: A Perspective-Based Approach [1.2184324428571227]
PerSpecML is a perspective-based approach for specifying ML-enabled systems.
It helps practitioners identify which attributes of ML and non-ML components are important for the overall system's quality.
arXiv Detail & Related papers (2023-09-14T18:31:16Z)
- Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects, including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z)
- Towards Perspective-Based Specification of Machine Learning-Enabled Systems [1.3406258114080236]
This paper describes our work towards a perspective-based approach for specifying ML-enabled systems.
The approach involves analyzing a set of 45 ML concerns grouped into five perspectives: objectives, user experience, infrastructure, model, and data.
The main contribution of this paper is to provide two new artifacts that can be used to help specify ML-enabled systems.
arXiv Detail & Related papers (2022-06-20T13:09:23Z)
- Retrieval-Enhanced Machine Learning [110.5237983180089]
We describe a generic retrieval-enhanced machine learning framework, which includes a number of existing models as special cases.
REML challenges information retrieval conventions, presenting opportunities for novel advances in core areas, including optimization.
The REML research agenda lays a foundation for a new style of information access research and paves a path towards advancing machine learning and artificial intelligence.
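The abstract keeps the framework generic, so purely as an assumed, minimal illustration of the retrieval-enhanced pattern (a predictor that consults an external corpus at inference time), a sketch in Python might look like the following; the hashing-based embedding and the voting rule are placeholders, not part of REML itself.

```python
# Minimal sketch of a retrieval-enhanced predictor: embed the query,
# retrieve the most similar documents from a small corpus, and let the
# retrieved items inform the prediction (here, a simple majority vote).
# Embedding and prediction rules are placeholders for illustration only.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy bag-of-words hashing embedding (stand-in for a learned encoder)."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

corpus = [
    ("the model overfits on small datasets", "modeling"),
    ("index shards speed up large-scale search", "retrieval"),
    ("regularization reduces generalization error", "modeling"),
]
corpus_embeddings = np.stack([embed(text) for text, _ in corpus])

def retrieval_enhanced_predict(query: str, k: int = 2) -> str:
    scores = corpus_embeddings @ embed(query)   # cosine similarity over unit vectors
    top_k = np.argsort(scores)[::-1][:k]        # indices of the k nearest documents
    labels = [corpus[i][1] for i in top_k]
    return max(set(labels), key=labels.count)   # majority vote over retrieved labels

print(retrieval_enhanced_predict("how to reduce overfitting in a model"))
```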
arXiv Detail & Related papers (2022-05-02T21:42:45Z)
- Panoramic Learning with A Standardized Machine Learning Formalism [116.34627789412102]
This paper presents a standardized equation of the learning objective that offers a unifying understanding of diverse ML algorithms.
It also provides guidance for the mechanical design of new ML solutions and serves as a promising vehicle towards panoramic learning with all experiences.
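The abstract does not reproduce the equation itself; purely as a hedged illustration of what a unifying objective of this general kind can look like, such objectives are often written as a trade-off between an experience term, a divergence to the model, and an uncertainty term:

```latex
% Illustrative generic form only; the notation is an assumption, not quoted from the paper.
% q: auxiliary distribution, p_theta: model, f: "experience" function (e.g., data or reward),
% H: entropy, D: a divergence, alpha/beta: trade-off weights.
\min_{q,\;\theta}\; -\,\alpha\,\mathbb{H}(q)\;+\;\beta\,\mathbb{D}\!\left(q,\,p_{\theta}\right)\;-\;\mathbb{E}_{q}\!\left[f\right]
```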
arXiv Detail & Related papers (2021-08-17T17:44:38Z)
- Practical Machine Learning Safety: A Survey and Primer [81.73857913779534]
Open-world deployment of Machine Learning algorithms in safety-critical applications such as autonomous vehicles needs to address a variety of ML vulnerabilities.
The survey reviews new models and training techniques that reduce generalization error, achieve domain adaptation, and detect outlier examples and adversarial attacks.
Our organization maps state-of-the-art ML techniques to safety strategies in order to enhance the dependability of ML algorithms from different aspects.
arXiv Detail & Related papers (2021-06-09T05:56:42Z)
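As one concrete instance of the kind of technique the safety survey above maps to a safety strategy (this example is assumed for illustration, not taken from the paper), a simple maximum-softmax-probability check flags inputs whose top predicted probability is unusually low as possible outliers:

```python
# Minimal sketch of a maximum-softmax-probability outlier check:
# inputs whose highest predicted class probability falls below a threshold
# are flagged as possible out-of-distribution or adversarial examples.
# The threshold and logits here are illustrative placeholders.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    exp = np.exp(z)
    return exp / exp.sum(axis=-1, keepdims=True)

def flag_outliers(logits: np.ndarray, threshold: float = 0.7) -> np.ndarray:
    """Return a boolean mask marking low-confidence (possibly OOD) inputs."""
    confidence = softmax(logits).max(axis=-1)
    return confidence < threshold

batch_logits = np.array([[4.0, 0.1, 0.2],    # confident, in-distribution-looking input
                         [0.9, 1.0, 1.1]])   # flat, low-confidence input
print(flag_outliers(batch_logits))           # prints [False  True]
```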
- Learning by Design: Structuring and Documenting the Human Choices in Machine Learning Development [6.903929927172917]
We present a method consisting of eight design questions that outline the deliberation and normative choices going into creating a machine learning model.
Our method affords several benefits, such as supporting critical assessment through methodological transparency.
We believe that our method can help ML practitioners structure and justify their choices and assumptions when developing ML models.
arXiv Detail & Related papers (2021-05-03T08:47:45Z)
- A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale machine learning aims to learn patterns from big data efficiently while maintaining comparable performance.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
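Since BAR only sees input-output responses, its gradients must be estimated numerically. The following is a heavily simplified sketch of that zeroth-order idea (random-direction finite differences on a synthetic stand-in loss), not the authors' implementation and omitting their multi-label mapping:

```python
# Simplified sketch of zeroth-order optimization as used in black-box
# reprogramming: learn an additive input "program" delta by estimating
# gradients from loss differences, since the model's internals are hidden.
# The black-box loss below is a synthetic stand-in for a real model API.
import numpy as np

rng = np.random.default_rng(0)
dim = 10
target = rng.normal(size=dim)  # hidden optimum of the stand-in black box

def black_box_loss(delta: np.ndarray) -> float:
    """Stand-in for querying the black-box model with reprogrammed inputs."""
    return float(np.sum((delta - target) ** 2))

def zeroth_order_gradient(delta: np.ndarray, mu: float = 1e-2, num_dirs: int = 20) -> np.ndarray:
    """Estimate the gradient from finite differences along random directions."""
    grad = np.zeros_like(delta)
    base = black_box_loss(delta)
    for _ in range(num_dirs):
        u = rng.normal(size=delta.shape)
        grad += (black_box_loss(delta + mu * u) - base) / mu * u
    return grad / num_dirs

delta = np.zeros(dim)
for step in range(200):
    delta -= 0.05 * zeroth_order_gradient(delta)  # plain gradient descent on the estimate

print("final loss:", black_box_loss(delta))
```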
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.