Qualitative Investigation in Explainable Artificial Intelligence: A Bit
More Insight from Social Science
- URL: http://arxiv.org/abs/2011.07130v2
- Date: Fri, 18 Dec 2020 22:22:13 GMT
- Authors: Adam J. Johs, Denise E. Agosto, Rosina O. Weber
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a focused analysis of user studies in explainable artificial
intelligence (XAI) entailing qualitative investigation. We draw on social
science corpora to suggest ways for improving the rigor of studies where XAI
researchers use observations, interviews, focus groups, and/or questionnaires
to capture qualitative data. We contextualize the presentation of the XAI
papers included in our analysis according to the components of rigor described
in the qualitative research literature: 1) underlying theories or frameworks,
2) methodological approaches, 3) data collection methods, and 4) data analysis
processes. The results of our analysis support calls from others in the XAI
community advocating for collaboration with experts from social disciplines to
bolster rigor and effectiveness in user studies.
Related papers
- User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study [5.775094401949666]
This study is situated in the field of Human-Centered Artificial Intelligence (HCAI).
It focuses on the results of a user-centered assessment of commonly used eXplainable Artificial Intelligence (XAI) algorithms.
arXiv Detail & Related papers (2024-10-21T12:32:39Z)
- BLADE: Benchmarking Language Model Agents for Data-Driven Science [18.577658530714505]
LM-based agents equipped with planning, memory, and code execution capabilities have the potential to support data-driven science.
We present BLADE, a benchmark to automatically evaluate agents' multifaceted approaches to open-ended research questions.
arXiv Detail & Related papers (2024-08-19T02:59:35Z)
- Generative AI Tools in Academic Research: Applications and Implications for Qualitative and Quantitative Research Methodologies [0.0]
This study examines the impact of Generative Artificial Intelligence (GenAI) on academic research, focusing on its application to qualitative and quantitative data analysis.
As GenAI tools evolve rapidly, they offer new possibilities for enhancing research productivity and democratising complex analytical processes.
Their integration into academic practice raises significant questions regarding research integrity and security, authorship, and the changing nature of scholarly work.
arXiv Detail & Related papers (2024-08-13T13:10:03Z)
- Harnessing AI for efficient analysis of complex policy documents: a case study of Executive Order 14110 [44.99833362998488]
Policy documents, such as legislation, regulations, and executive orders, are crucial in shaping society.
This study aims to evaluate the potential of AI in streamlining policy analysis and to identify the strengths and limitations of current AI approaches.
arXiv Detail & Related papers (2024-06-10T11:19:28Z)
- How much informative is your XAI? A decision-making assessment task to objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
User-centred approaches to XAI have been shown to positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z)
- DeSIQ: Towards an Unbiased, Challenging Benchmark for Social Intelligence Understanding [60.84356161106069]
We study the soundness of Social-IQ, a dataset of multiple-choice questions on videos of complex social interactions.
Our analysis reveals that Social-IQ contains substantial biases, which can be exploited by a moderately strong language model.
We introduce DeSIQ, a new challenging dataset, constructed by applying simple perturbations to Social-IQ.
arXiv Detail & Related papers (2023-10-24T06:21:34Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study of fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities across multiple protected attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Towards Human-centered Explainable AI: A Survey of User Studies for Model Explanations [18.971689499890363]
We identify and analyze 97 core papers with human-based XAI evaluations from the past five years.
Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems.
We propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners.
arXiv Detail & Related papers (2022-10-20T20:53:00Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- MAIR: Framework for mining relationships between research articles, strategies, and regulations in the field of explainable artificial intelligence [2.280298858971133]
It is essential to understand how regulation shapes research papers and AI-related policies, and vice versa.
This paper introduces a novel framework for joint analysis of AI-related policy documents and XAI research papers.
arXiv Detail & Related papers (2021-07-29T20:41:17Z)
- AR-LSAT: Investigating Analytical Reasoning of Text [57.1542673852013]
We study the challenge of analytical reasoning of text and introduce a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge, understanding, and reasoning abilities are required to do well on this task.
arXiv Detail & Related papers (2021-04-14T02:53:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.