Five Points to Check when Comparing Visual Perception in Humans and
Machines
- URL: http://arxiv.org/abs/2004.09406v3
- Date: Tue, 13 Apr 2021 16:03:20 GMT
- Title: Five Points to Check when Comparing Visual Perception in Humans and
Machines
- Authors: Christina M. Funke, Judy Borowski, Karolina Stosio, Wieland Brendel,
Thomas S. A. Wallis, Matthias Bethge
- Abstract summary: A growing amount of work is directed towards comparing information processing in humans and machines.
Here, we propose ideas on how to design, conduct and interpret experiments.
We demonstrate and apply these ideas through three case studies.
- Score: 26.761191892051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rise of machines to human-level performance in complex recognition
tasks, a growing amount of work is directed towards comparing information
processing in humans and machines. These studies are an exciting chance to
learn about one system by studying the other. Here, we propose ideas on how to
design, conduct and interpret experiments such that they adequately support the
investigation of mechanisms when comparing human and machine perception. We
demonstrate and apply these ideas through three case studies. The first case
study shows how human bias can affect how we interpret results, and that
several analytic tools can help to overcome this human reference point. In the
second case study, we highlight the difference between necessary and sufficient
mechanisms in visual reasoning tasks. Thereby, we show that contrary to
previous suggestions, feedback mechanisms might not be necessary for the tasks
in question. The third case study highlights the importance of aligning
experimental conditions. We find that a previously-observed difference in
object recognition does not hold when adapting the experiment to make
conditions more equitable between humans and machines. In presenting a
checklist for comparative studies of visual reasoning in humans and machines,
we hope to highlight how to overcome potential pitfalls in design or inference.
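To make the third case study's point about aligning experimental conditions more concrete, here is a minimal sketch (not the authors' code; all data, accuracy rates, and thresholds are hypothetical). It scores human and model decisions collected on the identical stimulus set, restricted to the same binary decision and the same metric, and reports accuracy with bootstrapped confidence intervals so that any remaining gap is less likely to be an artifact of mismatched conditions.

    import numpy as np

    rng = np.random.default_rng(0)

    def accuracy_with_ci(correct, n_boot=10_000):
        # Mean accuracy plus a bootstrapped 95% confidence interval.
        correct = np.asarray(correct, dtype=float)
        boots = rng.choice(correct, size=(n_boot, correct.size), replace=True).mean(axis=1)
        return correct.mean(), np.percentile(boots, [2.5, 97.5])

    # Hypothetical trial-level data: ground truth plus human and model responses
    # on the identical stimuli, reduced to the same binary decision (chance = 0.5).
    labels = rng.integers(0, 2, size=200)
    human = np.where(rng.random(200) < 0.85, labels, 1 - labels)  # simulated 85%-accurate observers
    model = np.where(rng.random(200) < 0.80, labels, 1 - labels)  # simulated 80%-accurate model

    for name, responses in [("human", human), ("model", model)]:
        acc, (lo, hi) = accuracy_with_ci(responses == labels)
        print(f"{name}: accuracy {acc:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

In practice one would also match factors such as image resolution, presentation conditions, and the number of response alternatives before attributing any residual difference to the underlying mechanisms.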
Related papers
- Learning to Assist Humans without Inferring Rewards [65.28156318196397]
We build upon prior work that studies assistance through the lens of empowerment.
An assistive agent aims to maximize the influence of the human's actions.
We prove that these representations estimate a similar notion of empowerment to that studied by prior work.
arXiv Detail & Related papers (2024-11-04T21:31:04Z)
- Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals [82.68757839524677]
Interpretability research aims to bridge the gap between the empirical success of large language models (LLMs) and our scientific understanding of them.
We propose a formulation of competition of mechanisms, which focuses on the interplay of multiple mechanisms instead of individual mechanisms.
Our findings show traces of the mechanisms and their competition across various model components and reveal attention positions that effectively control the strength of certain mechanisms.
arXiv Detail & Related papers (2024-02-18T17:26:51Z)
- How does the primate brain combine generative and discriminative computations in vision? [4.691670689443386]
Two contrasting conceptions of the inference process have each been influential in research on biological vision and machine vision.
According to one of these conceptions, vision inverts a generative model through an interrogation of the evidence, in a process often thought to involve top-down predictions of sensory data.
We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
arXiv Detail & Related papers (2024-01-11T16:07:58Z)
- Do humans and machines have the same eyes? Human-machine perceptual differences on image classification [8.474744196892722]
Trained computer vision models are assumed to solve vision tasks by imitating human behavior learned from training labels.
Our study first quantifies and analyzes the statistical distributions of mistakes from the two sources.
We empirically demonstrate a post-hoc human-machine collaboration that outperforms humans or machines alone; an illustrative combination rule is sketched after this list.
arXiv Detail & Related papers (2023-04-18T05:09:07Z)
- Task Formulation Matters When Learning Continually: A Case Study in Visual Question Answering [58.82325933356066]
Continual learning aims to train a model incrementally on a sequence of tasks without forgetting previous knowledge.
We present a detailed study of how different settings affect performance for Visual Question Answering.
arXiv Detail & Related papers (2022-09-30T19:12:58Z)
- A-ACT: Action Anticipation through Cycle Transformations [89.83027919085289]
We take a step back to analyze how the human capability to anticipate the future can be transferred to machine learning algorithms.
A recent study on human psychology explains that, in anticipating an occurrence, the human brain counts on both systems.
In this work, we study the impact of each system for the task of action anticipation and introduce a paradigm to integrate them in a learning framework.
arXiv Detail & Related papers (2022-04-02T21:50:45Z)
- Vision-Based Manipulators Need to Also See from Their Hands [58.398637422321976]
We study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations.
We find that a hand-centric (eye-in-hand) perspective affords reduced observability, but it consistently improves training efficiency and out-of-distribution generalization.
arXiv Detail & Related papers (2022-03-15T18:46:18Z)
- Machine Explanations and Human Understanding [31.047297225560566]
Explanations are hypothesized to improve human understanding of machine learning models.
However, empirical studies have found mixed and even negative results.
We show how human intuitions play a central role in enabling human understanding.
arXiv Detail & Related papers (2022-02-08T19:00:38Z)
- Empirical Estimates on Hand Manipulation are Recoverable: A Step Towards Individualized and Explainable Robotic Support in Everyday Activities [80.37857025201036]
A key challenge for robotic systems is to figure out the behavior of another agent.
Making correct inferences is especially challenging when (confounding) factors are not controlled experimentally.
We propose equipping robots with the necessary tools to conduct observational studies on people.
arXiv Detail & Related papers (2022-01-27T22:15:56Z)
- Beneficial and Harmful Explanatory Machine Learning [5.223556562214077]
This paper investigates the explanatory effects of a machine-learned theory in the context of simple two-person games.
It proposes a framework for identifying the harmfulness of machine explanations based on the cognitive science literature.
arXiv Detail & Related papers (2020-09-09T19:14:38Z)
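The post-hoc human-machine collaboration mentioned in the entry "Do humans and machines have the same eyes?" above is not specified in its summary, so the sketch below should be read as an assumption, not the cited paper's method: it uses a simple confidence-based deferral rule (the model answers when its top probability clears a threshold, otherwise the human's answer is used), with all data simulated.

    import numpy as np

    rng = np.random.default_rng(1)
    n, n_classes = 300, 10

    # Simulated ground truth and human answers (humans correct roughly 90% of the time).
    labels = rng.integers(0, n_classes, size=n)
    human = np.where(rng.random(n) < 0.90, labels, rng.integers(0, n_classes, size=n))

    # Simulated model probabilities: confident and correct on ~80% of trials,
    # close to uninformed (and therefore low-confidence) on the rest.
    probs = rng.dirichlet(np.ones(n_classes), size=n)
    probs[np.arange(n), labels] += np.where(rng.random(n) < 0.80, 2.0, 0.0)
    probs /= probs.sum(axis=1, keepdims=True)

    model_pred = probs.argmax(axis=1)
    confidence = probs.max(axis=1)

    threshold = 0.6  # assumed deferral threshold; would normally be tuned on held-out data
    combined = np.where(confidence >= threshold, model_pred, human)

    for name, pred in [("human", human), ("model", model_pred), ("combined", combined)]:
        print(f"{name}: accuracy {np.mean(pred == labels):.3f}")

The point of the sketch is only that combining two sources with different error distributions can beat either source alone; the specific threshold and simulated accuracies are illustrative.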
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.