Explaining decisions made with AI: A workbook (Use case 1: AI-assisted recruitment tool)
- URL: http://arxiv.org/abs/2104.03906v1
- Date: Sat, 20 Mar 2021 17:03:50 GMT
- Title: Explaining decisions made with AI: A workbook (Use case 1: AI-assisted recruitment tool)
- Authors: David Leslie and Morgan Briggs
- Abstract summary: The Alan Turing Institute and the Information Commissioner's Office have been working together to tackle the difficult issues surrounding explainable AI.
The ultimate product of this joint endeavour, Explaining decisions made with AI, published in May 2020, is the most comprehensive practical guidance on AI explanation produced anywhere to date.
The goal of the workbook is to summarise some of the main themes from Explaining decisions made with AI and then to provide the materials for a workshop exercise.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the last two years, The Alan Turing Institute and the Information
Commissioner's Office (ICO) have been working together to discover ways to
tackle the difficult issues surrounding explainable AI. The ultimate product of
this joint endeavour, Explaining decisions made with AI, published in May 2020,
is the most comprehensive practical guidance on AI explanation produced
anywhere to date. We have put together this workbook to help support the uptake
of that guidance. The goal of the workbook is to summarise some of the main themes
from Explaining decisions made with AI and then to provide the materials for a
workshop exercise that has been built around a use case created to help you
gain a flavour of how to put the guidance into practice. In the first three
sections, we run through the basics of Explaining decisions made with AI. We
provide a précis of the four principles of AI explainability, the typology of
AI explanations, and the tasks involved in the explanation-aware design,
development, and use of AI/ML systems. We then provide some reflection
questions, which are intended to be a launching pad for group discussion, and a
starting point for the case-study-based exercise that we have included as
Appendix B. In Appendix A, we go into more detailed suggestions about how to
organise the workshop. These recommendations are based on two workshops we had
the privilege of co-hosting with our colleagues from the ICO and Manchester
Metropolitan University in January 2021. The participants of these workshops
came from both the private and public sectors, and we are extremely grateful to
them for their energy, enthusiasm, and tremendous insight. This workbook would
simply not exist without the commitment and keenness of all our collaborators
and workshop participants.
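
To give a concrete flavour of the kind of artefact the workshop exercise builds towards, the sketch below shows one way a "rationale explanation" (one of the explanation types in the guidance's typology) might be surfaced for a simple recruitment-screening model. It is a minimal, hypothetical illustration: the feature names, training data, and linear model are assumptions made for this example and are not taken from the workbook's use case materials.

```python
# Hypothetical sketch: a per-candidate "rationale explanation" for a linear
# recruitment-screening model. Feature names and data are illustrative only;
# they are not drawn from the workbook's use case.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["years_experience", "skills_match", "education_level", "assessment_score"]

# Toy training data: each row is a (normalised) candidate profile, and each
# label records whether a human reviewer shortlisted that candidate.
X = np.array([
    [0.2, 0.9, 0.5, 0.8],
    [0.9, 0.3, 0.7, 0.4],
    [0.1, 0.2, 0.3, 0.1],
    [0.8, 0.8, 0.9, 0.9],
    [0.4, 0.5, 0.2, 0.3],
    [0.7, 0.9, 0.6, 0.7],
])
y = np.array([1, 0, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

def rationale(candidate: np.ndarray) -> None:
    """Print each feature's contribution (coefficient * value) to the score.

    For a linear model this decomposition is exact, so the explanation
    faithfully reflects what the model actually computed.
    """
    contributions = model.coef_[0] * candidate
    proba = model.predict_proba(candidate.reshape(1, -1))[0, 1]
    print(f"Shortlist probability: {proba:.2f}")
    for name, c in sorted(zip(FEATURES, contributions), key=lambda t: -abs(t[1])):
        print(f"  {name:>18}: {c:+.3f}")

rationale(np.array([0.6, 0.9, 0.4, 0.7]))
```

The choice of a linear model here is deliberate: its decision decomposes exactly into per-feature contributions, which is in the spirit of the guidance's advice to weigh the interpretability of a model against its complexity when designing systems for high-impact settings such as recruitment.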
Related papers
- The Ethics of Advanced AI Assistants (arXiv, 2024-04-24)
  This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants. We define advanced AI assistants as artificial agents with natural language interfaces whose function is to plan and execute sequences of actions on behalf of a user. We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment, and how best to evaluate advanced AI assistants.
- Report of the 1st Workshop on Generative AI and Law (arXiv, 2023-11-11)
  This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw). A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI.
- Training Towards Critical Use: Learning to Situate AI Predictions Relative to Human Knowledge (arXiv, 2023-08-30)
  We introduce a process-oriented notion of appropriate reliance, called critical use, that centers the human's ability to situate AI predictions against knowledge that is uniquely available to them but unavailable to the AI model. We conduct a randomized online experiment in a complex social decision-making setting: child maltreatment screening. We find that, by providing participants with accelerated, low-stakes opportunities to practice AI-assisted decision-making, novices came to exhibit patterns of disagreement with AI that resemble those of experienced workers.
- Seamful XAI: Operationalizing Seamful Design in Explainable AI (arXiv, 2022-11-12)
  Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps. We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches. We explore this process with 43 AI practitioners and real end-users.
- Toward Ethical AIED (arXiv, 2022-03-11)
  This paper presents the key conclusions of the forthcoming edited book The Ethics of Artificial Intelligence in Education: Practices, Challenges and Debates (August 2022, Routledge). As well as highlighting the key contributions to the book, it discusses the key questions and grand challenges for the field of AI in Education (AIED). The book itself presents diverse perspectives from outside and from within AIED as a way of achieving a broad view of the field's key ethical issues.
- On some Foundational Aspects of Human-Centered Artificial Intelligence (arXiv, 2021-12-29)
  There is no clear definition of what is meant by Human-Centered Artificial Intelligence. This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components. We see the notion of the HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
- Artificial Intelligence Ethics and Safety: practical tools for creating "good" models (arXiv, 2021-12-14)
  The AI Robotics Ethics Society (AIRES) is a non-profit organization founded in 2018 by Aaron Hui. AIRES now has chapters at universities such as UCLA (Los Angeles), USC (University of Southern California), Caltech (California Institute of Technology), Stanford University, Cornell University, Brown University, and the Pontifical Catholic University of Rio Grande do Sul (Brazil).
- Demystifying Ten Big Ideas and Rules Every Fire Scientist & Engineer Should Know About Blackbox, Whitebox & Causal Artificial Intelligence (arXiv, 2021-11-23)
  This letter is a companion to the Smart Systems in Fire Engineering special issue sponsored by Fire Technology. The first section outlines big ideas pertaining to AI and answers some of the burning questions regarding the merit of adopting AI in the fire domain. The second section presents a set of rules, or technical recommendations, that an AI user may find helpful whenever AI is used as an investigation methodology.
- Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir" (arXiv, 2021-11-01)
  This paper aims to ground what we dub a "participatory turn" in AI design by synthesizing the existing literature on participation and through empirical analysis of its current practices. Based on our literature synthesis and empirical research, the paper presents a conceptual framework for analyzing participatory approaches to AI design.
- Becoming Good at AI for Good (arXiv, 2021-04-23)
  We detail the different aspects of AI-for-good collaborations, broken down into four high-level categories. We briefly describe two case studies to illustrate how some of these takeaways were applied in practice.
- A general framework for scientifically inspired explanations in AI (arXiv, 2020-03-02)
  We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented. This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
This list is automatically generated from the titles and abstracts of the papers on this site.