Evaluating a Methodology for Increasing AI Transparency: A Case Study
- URL: http://arxiv.org/abs/2201.13224v2
- Date: Tue, 12 Mar 2024 15:46:43 GMT
- Title: Evaluating a Methodology for Increasing AI Transparency: A Case Study
- Authors: David Piorkowski, John Richards, Michael Hind
- Abstract summary: Given growing concerns about the potential harms of artificial intelligence, societies have begun to demand more transparency about how AI models and systems are created and used.
To address these concerns, several efforts have proposed documentation templates containing questions to be answered by model developers.
No single template can cover the needs of diverse documentation consumers.
- Score: 8.265282762929509
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In reaction to growing concerns about the potential harms of artificial
intelligence (AI), societies have begun to demand more transparency about how
AI models and systems are created and used. To address these concerns, several
efforts have proposed documentation templates containing questions to be
answered by model developers. These templates provide a useful starting point,
but no single template can cover the needs of diverse documentation consumers.
It is possible in principle, however, to create a repeatable methodology to
generate truly useful documentation. Richards et al. [25] proposed such a
methodology for identifying specific documentation needs and creating templates
to address those needs. Although this is a promising proposal, it has not been
evaluated.
This paper presents the first evaluation of this user-centered methodology in
practice, reporting on the experiences of a team in the domain of AI for
healthcare that adopted it to increase transparency for several AI models. The
methodology was found to be usable by developers not trained in user-centered
techniques, guiding them to create a documentation template that addressed the
specific needs of their consumers while remaining reusable across different
models and use cases. An analysis of the benefits and costs of the methodology
is presented, and suggestions for further improvement of both the methodology
and its supporting tools are summarized.
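To make the idea of a reusable, consumer-driven documentation template more concrete, below is a minimal, hypothetical sketch in Python. The field names, the example model, and the helper method are illustrative assumptions, not the template produced by the team in the paper.

```python
# Hypothetical sketch: a fixed set of documentation questions (identified from
# consumer needs) that developers fill in once per model. Field names are
# illustrative only.
from dataclasses import dataclass

@dataclass
class ModelFactSheet:
    model_name: str
    intended_use: str = ""          # who should use the model, and for what
    training_data: str = ""         # provenance and known gaps in the data
    evaluation: str = ""            # metrics, test populations, known limits
    risks_and_mitigations: str = "" # harms considered and how they are reduced
    contacts: str = ""              # who to ask when the answers are unclear

    def unanswered(self):
        """Return the questions a developer still has to answer."""
        return [name for name, value in vars(self).items()
                if isinstance(value, str) and not value.strip()]

# Illustrative usage: the same template is reused for a new (hypothetical) model.
sheet = ModelFactSheet(model_name="sepsis-risk-v2",
                       intended_use="Decision support for triage nurses")
print(sheet.unanswered())  # remaining documentation gaps for this model
```

Representing the template as a single structured type is one way to keep it reusable across models while still letting a team add consumer-specific questions as new fields.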
Related papers
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z) - Establishing Knowledge Preference in Language Models [80.70632813935644]
Language models are known to encode a great amount of factual knowledge through pretraining.
Such knowledge might be insufficient to cater to user requests.
When answering questions about ongoing events, the model should use recent news articles to update its response.
When some facts are edited in the model, the updated facts should override all prior knowledge learned by the model.
arXiv Detail & Related papers (2024-07-17T23:16:11Z) - Documenting Ethical Considerations in Open Source AI Models [8.517777178514242]
This study investigates how developers document ethical aspects of open source AI models in practice.
After filtering an initial set of 2,347 documents, we identified 265 relevant ones.
Six themes emerge, with the three largest ones being model behavioural risks, model use cases, and model risk mitigation.
arXiv Detail & Related papers (2024-06-26T05:02:44Z) - Quantitative Assurance and Synthesis of Controllers from Activity
Diagrams [4.419843514606336]
Probabilistic model checking is a widely used formal verification technique to automatically verify qualitative and quantitative properties.
However, applying it requires specialized modeling expertise, making it less accessible to researchers and engineers who lack that background.
We propose a comprehensive verification framework for activity diagrams (ADs), including a new profile for probability, time, and quality annotations, a semantic interpretation of ADs in terms of three Markov models, and a set of transformation rules from ADs to the PRISM language.
Most importantly, we developed algorithms for transformation and implemented them in a tool, called QASCAD, using model-based techniques, for fully automated verification.
arXiv Detail & Related papers (2024-02-29T22:40:39Z) - Use case cards: a use case reporting framework inspired by the European
AI Act [0.0]
We propose a new framework for the documentation of use cases, which we call "use case cards".
Unlike other documentation methodologies, it focuses on the purpose and operational use of an AI system.
The proposed framework is the result of a co-design process involving a relevant team of EU policy experts and scientists.
arXiv Detail & Related papers (2023-06-23T15:47:19Z) - Towards Human-Interpretable Prototypes for Visual Assessment of Image
Classification Models [9.577509224534323]
We need models that are interpretable by design and built on a reasoning process similar to that of humans.
ProtoPNet claims to discover visually meaningful prototypes in an unsupervised way.
We find that these prototypes still have a long way to go towards providing definitive explanations.
arXiv Detail & Related papers (2022-11-22T11:01:22Z) - Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z) - Beyond Explaining: Opportunities and Challenges of XAI-Based Model
Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI in practice to improve various properties of ML models.
We show empirically, through experiments in both toy and realistic settings, how explanations can help improve properties such as a model's generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z) - A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions, relating a model's performance to the agreement between its rationales and the human ones (a minimal sketch of one such agreement measure appears after this list).
arXiv Detail & Related papers (2020-09-25T12:01:53Z) - A Methodology for Creating AI FactSheets [67.65802440158753]
This paper describes a methodology for creating the form of AI documentation we call FactSheets.
Within each step of the methodology, we describe the issues to consider and the questions to explore.
This methodology will accelerate the broader adoption of transparent AI documentation.
arXiv Detail & Related papers (2020-06-24T15:08:59Z)
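As noted above for the diagnostic study of explainability techniques, one way to quantify agreement between a technique's rationales and human annotations is a token-level overlap score. The sketch below is a hypothetical illustration; the function name, the top-k matching rule, and the example scores are assumptions, and the paper's own diagnostic properties and metrics may differ.

```python
import numpy as np

def rationale_agreement(saliency, human_mask, k=None):
    """Token-level F1 between a technique's saliency scores and a binary
    human rationale mask over the same tokens (illustrative measure only)."""
    saliency = np.asarray(saliency, dtype=float)
    human_mask = np.asarray(human_mask, dtype=int)
    if k is None:
        k = int(human_mask.sum())              # match the human rationale size
    pred_mask = np.zeros_like(human_mask)
    pred_mask[np.argsort(-saliency)[:k]] = 1   # top-k salient tokens form the model's rationale
    tp = int((pred_mask & human_mask).sum())
    precision = tp / max(int(pred_mask.sum()), 1)
    recall = tp / max(int(human_mask.sum()), 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)

# Hypothetical example: saliency over six tokens vs. a human-marked span.
print(rationale_agreement([0.9, 0.1, 0.7, 0.05, 0.2, 0.8], [1, 0, 1, 0, 0, 1]))  # -> 1.0
```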
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.