Do humans and machines have the same eyes? Human-machine perceptual
differences on image classification
- URL: http://arxiv.org/abs/2304.08733v1
- Date: Tue, 18 Apr 2023 05:09:07 GMT
- Title: Do humans and machines have the same eyes? Human-machine perceptual
differences on image classification
- Authors: Minghao Liu, Jiaheng Wei, Yang Liu, James Davis
- Abstract summary: Trained computer vision models are assumed to solve vision tasks by imitating human behavior learned from training labels.
Our study first quantifies and analyzes the statistical distributions of mistakes from the two sources.
We empirically demonstrate a post-hoc human-machine collaboration that outperforms humans or machines alone.
- Score: 8.474744196892722
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trained computer vision models are assumed to solve vision tasks by imitating
human behavior learned from training labels. Most efforts in recent vision
research focus on measuring the model task performance using standardized
benchmarks. Limited work has been done to understand the perceptual difference
between humans and machines. To fill this gap, our study first quantifies and
analyzes the statistical distributions of mistakes from the two sources. We
then explore human vs. machine expertise after ranking tasks by difficulty
levels. Even when humans and machines have similar overall accuracies, the
distribution of answers may vary. Leveraging the perceptual difference between
humans and machines, we empirically demonstrate a post-hoc human-machine
collaboration that outperforms humans or machines alone.
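The abstract does not spell out how the post-hoc collaboration works. One simple way such a combination might be implemented is to defer to the human label whenever the machine's confidence is low; the threshold rule and all names below are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a post-hoc human-machine collaboration:
# take the machine's prediction when it is confident, otherwise
# defer to the human annotator. Illustrative only.

def collaborate(machine_probs, human_labels, threshold=0.8):
    """Combine per-image machine class probabilities with human labels.

    machine_probs: list of dicts mapping class name -> probability
    human_labels:  list of class names chosen by human annotators
    """
    combined = []
    for probs, human in zip(machine_probs, human_labels):
        best_class = max(probs, key=probs.get)
        if probs[best_class] >= threshold:
            combined.append(best_class)   # trust the confident machine
        else:
            combined.append(human)        # defer to the human
    return combined

machine_probs = [{"cat": 0.95, "dog": 0.05},   # machine confident
                 {"cat": 0.55, "dog": 0.45}]   # machine uncertain
human_labels = ["dog", "dog"]
print(collaborate(machine_probs, human_labels))  # ['cat', 'dog']
```

A rule like this exploits the perceptual difference the abstract describes: machine errors and human errors are distributed differently, so each source can cover the other's weak cases.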
Related papers
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement
Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - Vision-Based Manipulators Need to Also See from Their Hands [58.398637422321976]
We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations.
We find that a hand-centric (eye-in-hand) perspective affords reduced observability, but it consistently improves training efficiency and out-of-distribution generalization.
arXiv Detail & Related papers (2022-03-15T18:46:18Z) - Comparing Visual Reasoning in Humans and AI [66.89451296340809]
We created a dataset of complex scenes that contained human behaviors and social interactions.
We used a quantitative metric to compare AI and human scene descriptions against a ground truth formed from five other human descriptions of each scene.
Results show that machine/human agreement on scene descriptions is much lower than human/human agreement for our complex scenes.
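The summary does not specify which similarity metric was used. A token-overlap (Jaccard) similarity averaged over the reference descriptions is one illustrative stand-in for such a description-agreement score; it is an assumption for demonstration, not the metric from that paper.

```python
# Illustrative stand-in for a description-agreement metric: mean Jaccard
# token overlap between one description and several reference descriptions.
# This is an assumed metric for demonstration, not the paper's own.

def jaccard(a, b):
    """Jaccard similarity between the word sets of two descriptions."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def agreement(description, references):
    """Mean Jaccard similarity of one description against reference ones."""
    return sum(jaccard(description, r) for r in references) / len(references)

refs = ["two people shaking hands", "two people greet each other"]
print(round(agreement("two people shaking hands", refs), 3))  # 0.643
```

Under a metric like this, "much lower machine/human agreement" means the machine's descriptions share noticeably fewer words and concepts with the human references than the humans' descriptions share with each other.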
arXiv Detail & Related papers (2021-04-29T04:44:13Z) - Dissonance Between Human and Machine Understanding [16.32018730049208]
We present a large-scale crowdsourcing study that reveals and quantifies the dissonance between human and machine understanding.
Our findings have important implications on human-machine collaboration, considering that a long term goal in the field of artificial intelligence is to make machines capable of learning and reasoning like humans.
arXiv Detail & Related papers (2021-01-18T21:45:35Z) - Human vs. supervised machine learning: Who learns patterns faster? [0.0]
This study provides an answer to how learning performance differs between humans and machines when there is limited training data.
We designed an experiment in which 44 humans and three different machine learning algorithms identified patterns in labeled training data and then labeled new instances according to the patterns they found.
arXiv Detail & Related papers (2020-11-30T13:39:26Z) - A robot that counts like a child: a developmental model of counting and
pointing [69.26619423111092]
A novel neuro-robotics model capable of counting real items is introduced.
The model allows us to investigate the interaction between embodiment and numerical cognition.
The trained model is able to count a set of items and at the same time points to them.
arXiv Detail & Related papers (2020-08-05T21:06:27Z) - Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
arXiv Detail & Related papers (2020-05-01T20:00:23Z) - Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs [90.20235972293801]
Aiming to understand how human (false-)belief, a core socio-cognitive ability, affects human interactions with robots, this paper proposes a graphical model to represent object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse the individual parse graphs (pg) from all robots across multiple views into a joint pg, affording more effective reasoning and overcoming errors originating from a single view.
arXiv Detail & Related papers (2020-04-25T23:02:04Z) - Five Points to Check when Comparing Visual Perception in Humans and
Machines [26.761191892051]
A growing amount of work is directed towards comparing information processing in humans and machines.
Here, we propose ideas on how to design, conduct and interpret experiments.
We demonstrate and apply these ideas through three case studies.
arXiv Detail & Related papers (2020-04-20T16:05:36Z) - Robot self/other distinction: active inference meets neural networks
learning in a mirror [9.398766540452632]
We present an algorithm that enables a robot to perform non-appearance self-recognition in a mirror.
The algorithm combines active inference, a theoretical model of perception and action in the brain, with neural network learning.
Experimental results on a humanoid robot show the reliability of the algorithm for different initial conditions.
arXiv Detail & Related papers (2020-04-11T19:51:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.