Dissonance Between Human and Machine Understanding
- URL: http://arxiv.org/abs/2101.07337v1
- Date: Mon, 18 Jan 2021 21:45:35 GMT
- Title: Dissonance Between Human and Machine Understanding
- Authors: Zijian Zhang, Jaspreet Singh, Ujwal Gadiraju, Avishek Anand
- Abstract summary: We present a large-scale crowdsourcing study that reveals and quantifies the dissonance between human and machine understanding.
Our findings have important implications for human-machine collaboration, considering that a long-term goal in the field of artificial intelligence is to make machines capable of learning and reasoning like humans.
- Score: 16.32018730049208
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Complex machine learning models are deployed in several critical domains
including healthcare and autonomous vehicles nowadays, albeit as functional
black boxes. Consequently, there has been a recent surge in interpreting
decisions of such complex models in order to explain their actions to humans.
Models that correspond to human interpretation of a task are more desirable in
certain contexts and can help attribute liability, build trust, expose biases,
and in turn build better models. It is, therefore, crucial to understand how
and which models conform to human understanding of tasks. In this paper, we
present a large-scale crowdsourcing study that reveals and quantifies the
dissonance between human and machine understanding, through the lens of an
image classification task. In particular, we seek to answer the following
questions: Which (well-performing) complex ML models are closer to humans in
their use of features to make accurate predictions? How does task difficulty
affect the feature selection capability of machines in comparison to humans?
Are humans consistently better at selecting features that make image
recognition more accurate? Our findings have important implications for
human-machine collaboration, considering that a long-term goal in the field of
artificial intelligence is to make machines capable of learning and reasoning
like humans.
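The abstract does not spell out how dissonance is measured; as a minimal sketch, assuming human annotators mark important image regions and the model exposes a saliency map, one plausible agreement measure is the intersection-over-union (IoU) between the two. The function, threshold, and metric below are illustrative assumptions, not the authors' protocol.

```python
# Minimal sketch (assumed, not the paper's protocol): quantify human-machine
# feature agreement as the IoU between a binary human-annotation mask and a
# thresholded model saliency map. Lower IoU = greater dissonance.
import numpy as np

def feature_agreement(human_mask: np.ndarray,
                      saliency_map: np.ndarray,
                      threshold: float = 0.5) -> float:
    """IoU between human-marked regions and model-salient regions (HxW)."""
    machine_mask = saliency_map >= threshold   # binarize the saliency map
    human_mask = human_mask.astype(bool)
    intersection = np.logical_and(human_mask, machine_mask).sum()
    union = np.logical_or(human_mask, machine_mask).sum()
    return float(intersection / union) if union else 1.0  # empty masks agree

# Toy example: the model highlights exactly the pixels humans marked.
human = np.array([[1, 0], [0, 1]])
saliency = np.array([[0.9, 0.1], [0.2, 0.8]])
print(feature_agreement(human, saliency))  # 1.0
```

Averaged over many images, such a score could rank models by how human-like their feature use is, and tracking it against task difficulty speaks to the abstract's second question.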
Related papers
- Human-Modeling in Sequential Decision-Making: An Analysis through the Lens of Human-Aware AI [20.21053807133341]
We try to provide an account of what constitutes a human-aware AI system.
We see that human-aware AI is a design-oriented paradigm, one that focuses on the need for modeling the humans it may interact with.
arXiv Detail & Related papers (2024-05-13T14:17:52Z)
- PhyGrasp: Generalizing Robotic Grasping with Physics-informed Large Multimodal Models [58.33913881592706]
Humans can easily apply their intuitive physics to grasp skillfully and change grasps efficiently, even for objects they have never seen before.
This work delves into infusing such physical commonsense reasoning into robotic manipulation.
We introduce PhyGrasp, a multimodal large model that leverages inputs from two modalities: natural language and 3D point clouds.
arXiv Detail & Related papers (2024-02-26T18:57:52Z)
- The Generative AI Paradox: "What It Can Create, It May Not Understand" [81.89252713236746]
The recent wave of generative AI has sparked excitement and concern over potentially superhuman levels of artificial intelligence.
At the same time, models still show basic errors in understanding that would not be expected even in non-expert humans.
This presents us with an apparent paradox: how do we reconcile seemingly superhuman capabilities with the persistence of errors that few humans would make?
arXiv Detail & Related papers (2023-10-31T18:07:07Z)
- Do humans and machines have the same eyes? Human-machine perceptual differences on image classification [8.474744196892722]
Trained computer vision models are assumed to solve vision tasks by imitating human behavior learned from training labels.
Our study first quantifies and analyzes the statistical distributions of mistakes from the two sources.
We empirically demonstrate a post-hoc human-machine collaboration that outperforms humans or machines alone (a minimal combiner sketch appears after this list).
arXiv Detail & Related papers (2023-04-18T05:09:07Z)
- JECC: Commonsense Reasoning Tasks Derived from Interactive Fictions [75.42526766746515]
We propose a new commonsense reasoning dataset based on human Interactive Fiction (IF) gameplay walkthroughs.
Our dataset focuses on the assessment of functional commonsense knowledge rules rather than factual knowledge.
Experiments show that the introduced dataset is challenging for previous machine reading models as well as new large language models.
arXiv Detail & Related papers (2022-10-18T19:20:53Z)
- Reasoning about Counterfactuals to Improve Human Inverse Reinforcement Learning [5.072077366588174]
Humans naturally infer other agents' beliefs and desires by reasoning about their observable behavior.
We propose to incorporate the learner's current understanding of the robot's decision making into our model of human IRL.
We also propose a novel measure for estimating the difficulty for a human to predict instances of a robot's behavior in unseen environments.
arXiv Detail & Related papers (2022-03-03T17:06:37Z)
- Machine Explanations and Human Understanding [31.047297225560566]
Explanations are hypothesized to improve human understanding of machine learning models, yet empirical studies have found mixed and even negative results.
We show how human intuitions play a central role in enabling human understanding.
arXiv Detail & Related papers (2022-02-08T19:00:38Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on such domains as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
arXiv Detail & Related papers (2020-05-01T20:00:23Z)
- Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs [90.20235972293801]
Aiming to understand how human (false-)belief, a core socio-cognitive ability, would affect human interactions with robots, this paper proposes to adopt a graphical model to unify the representation of object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse the individual parse graphs (pg) from all robots across multiple views into a joint pg, which affords more effective reasoning capability and overcomes errors originating from a single view.
arXiv Detail & Related papers (2020-04-25T23:02:04Z)
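On the theme of the "Do humans and machines have the same eyes?" and "Learning to Complement Humans" entries above, here is a minimal sketch of one assumed form of post-hoc human-machine collaboration: accept the model's prediction when it is confident and defer to a human label otherwise. The deferral rule and threshold are illustrative assumptions, not either paper's method.

```python
# Minimal sketch (assumed): a confidence-threshold combiner that keeps the
# model's argmax prediction when its top probability is high and defers to
# a human label when the model is uncertain.
import numpy as np

def combine_predictions(model_probs: np.ndarray,
                        human_labels: np.ndarray,
                        confidence_threshold: float = 0.8) -> np.ndarray:
    """Per example: model prediction if max probability clears the
    threshold, otherwise the human-provided label."""
    model_preds = model_probs.argmax(axis=1)   # model's chosen class
    confidence = model_probs.max(axis=1)       # model's top probability
    defer = confidence < confidence_threshold  # uncertain -> ask a human
    return np.where(defer, human_labels, model_preds)

# Toy usage: confident on the first example, uncertain on the second.
probs = np.array([[0.95, 0.05], [0.55, 0.45]])
human = np.array([0, 1])
print(combine_predictions(probs, human))  # [0 1]
```

In practice the threshold would be tuned on held-out data to trade off human effort against combined accuracy.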