Do People Engage Cognitively with AI? Impact of AI Assistance on
Incidental Learning
- URL: http://arxiv.org/abs/2202.05402v1
- Date: Fri, 11 Feb 2022 01:28:59 GMT
- Title: Do People Engage Cognitively with AI? Impact of AI Assistance on
Incidental Learning
- Authors: Krzysztof Z. Gajos and Lena Mamykina
- Abstract summary: When people receive advice while making difficult decisions, they often make better decisions in the moment and also increase their knowledge in the process.
How do people process the information and advice they receive from AI, and do they engage with it deeply enough to enable learning?
This work provides some of the most direct evidence to date that it may not be sufficient to include explanations together with AI-generated recommendations.
- Score: 19.324012098032515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When people receive advice while making difficult decisions, they often make
better decisions in the moment and also increase their knowledge in the
process. However, such incidental learning can only occur when people
cognitively engage with the information they receive and process this
information thoughtfully. How do people process the information and advice they
receive from AI, and do they engage with it deeply enough to enable learning?
To answer these questions, we conducted three experiments in which individuals
were asked to make nutritional decisions and received simulated AI
recommendations and explanations. In the first experiment, we found that when
people were presented with both a recommendation and an explanation before
making their choice, they made better decisions than they did when they
received no such help, but they did not learn. In the second experiment,
participants first made their own choice, and only then saw a recommendation
and an explanation from AI; this condition also resulted in improved decisions,
but no learning. However, in our third experiment, participants were presented
with just an AI explanation but no recommendation and had to arrive at their
own decision. This condition led to both more accurate decisions and learning
gains. We hypothesize that learning gains in this condition were due to deeper
engagement with explanations needed to arrive at the decisions. This work
provides some of the most direct evidence to date that it may not be sufficient
to include explanations together with AI-generated recommendations to ensure
that people engage carefully with the AI-provided information. This work also
presents one technique that enables incidental learning and, by implication,
can help people process AI recommendations and explanations more carefully.
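To make the three study conditions concrete, below is a minimal sketch of how the presentation logic might be structured. This is not the authors' study software: the SimulatedAI fields, the nutrition question, and the console I/O are illustrative assumptions standing in for a real study interface.
```python
# Minimal sketch (assumed, not the authors' materials) of the three
# presentation conditions described in the abstract.

from dataclasses import dataclass


@dataclass
class SimulatedAI:
    recommendation: str  # e.g., which of two meals is lower in carbohydrates
    explanation: str     # e.g., the estimated carbohydrate content of each meal


def ask_choice(prompt: str) -> str:
    """Collect the participant's decision from the console (stand-in for a study UI)."""
    return input(f"{prompt} ").strip()


def run_trial(condition: str, question: str, ai: SimulatedAI) -> str:
    """Run one decision trial under one of the three experimental conditions."""
    if condition == "recommendation_then_choice":
        # Experiment 1: recommendation + explanation shown before the decision.
        print(f"AI recommends: {ai.recommendation}")
        print(f"Because: {ai.explanation}")
        return ask_choice(question)

    if condition == "choice_then_recommendation":
        # Experiment 2: participant decides first, then sees the AI output
        # and may revise the answer.
        first = ask_choice(question)
        print(f"AI recommends: {ai.recommendation}")
        print(f"Because: {ai.explanation}")
        return ask_choice(f"Your answer was '{first}'. Final answer?")

    if condition == "explanation_only":
        # Experiment 3: only the explanation is shown; the participant must
        # derive the decision from it (the condition that produced learning).
        print(f"AI notes: {ai.explanation}")
        return ask_choice(question)

    raise ValueError(f"unknown condition: {condition}")


if __name__ == "__main__":
    ai = SimulatedAI(
        recommendation="Meal A",
        explanation="Meal A has roughly 20 g of carbohydrates; Meal B has roughly 45 g.",
    )
    answer = run_trial("explanation_only", "Which meal is lower in carbohydrates?", ai)
    print(f"Recorded answer: {answer}")
```
The point of the sketch is only that the conditions differ in what the interface reveals and when; in the explanation-only branch the participant has to work through the explanation to reach an answer, which is the engagement the authors credit for the learning gains.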
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Training Towards Critical Use: Learning to Situate AI Predictions
Relative to Human Knowledge [22.21959942886099]
We introduce a process-oriented notion of appropriate reliance called critical use that centers the human's ability to situate AI predictions against knowledge that is uniquely available to them but unavailable to the AI model.
We conduct a randomized online experiment in a complex social decision-making setting: child maltreatment screening.
We find that, by providing participants with accelerated, low-stakes opportunities to practice AI-assisted decision-making, novices came to exhibit patterns of disagreement with AI that resemble those of experienced workers.
arXiv Detail & Related papers (2023-08-30T01:54:31Z) - Understanding the Role of Human Intuition on Reliance in Human-AI
Decision-Making with Explanations [44.01143305912054]
We study how decision-makers' intuition affects their use of AI predictions and explanations.
Our results identify three types of intuition involved in reasoning about AI predictions and explanations.
We use these pathways to explain why feature-based explanations did not improve participants' decision outcomes and increased their overreliance on AI.
arXiv Detail & Related papers (2023-01-18T01:33:50Z) - The Role of Heuristics and Biases During Complex Choices with an AI
Teammate [0.0]
We argue that classic experimental methods are insufficient for studying complex choices made with AI helpers.
We show that framing and anchoring effects impact how people work with an AI helper and are predictive of choice outcomes.
arXiv Detail & Related papers (2023-01-14T20:06:43Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - To Trust or to Think: Cognitive Forcing Functions Can Reduce
Overreliance on AI in AI-assisted Decision-making [4.877174544937129]
People supported by AI-powered decision support tools frequently overrely on the AI.
Adding explanations to the AI decisions does not appear to reduce the overreliance.
Our research suggests that human cognitive motivation moderates the effectiveness of explainable AI solutions.
arXiv Detail & Related papers (2021-02-19T00:38:53Z) - Explainable AI and Adoption of Algorithmic Advisors: an Experimental
Study [0.6875312133832077]
We develop an experimental methodology where participants play a web-based game, during which they receive advice from either a human or an algorithmic advisor.
We evaluate whether the different types of explanations affect the readiness to adopt, willingness to pay and trust a financial AI consultant.
We find that the types of explanations that promote adoption during first encounter differ from those that are most successful following failure or when cost is involved.
arXiv Detail & Related papers (2021-01-05T09:34:38Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence scores can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)