Visual Enumeration is Challenging for Large-scale Generative AI
- URL: http://arxiv.org/abs/2402.03328v2
- Date: Fri, 3 May 2024 15:24:20 GMT
- Title: Visual Enumeration is Challenging for Large-scale Generative AI
- Authors: Alberto Testolin, Kuinan Hou, Marco Zorzi
- Abstract summary: Humans can readily judge the number of objects in a visual scene, even without counting.
We investigate whether large-scale generative Artificial Intelligence (AI) systems have a human-like number sense.
- Score: 0.08192907805418582
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans can readily judge the number of objects in a visual scene, even without counting, and such a skill has been documented in many animal species and babies prior to language development and formal schooling. Numerical judgments are error-free for small sets, while for larger collections responses become approximate, with variability increasing proportionally to the target number. This response pattern is observed for items of all kinds, despite variation in object features (such as color or shape), suggesting that our visual number sense relies on abstract representations of numerosity. Here, we investigate whether large-scale generative Artificial Intelligence (AI) systems have a human-like number sense, which should allow them to reliably name the number of objects in simple visual stimuli or generate images containing a target number of items in the 1-10 range. Surprisingly, most of the foundation models considered have a poor number sense: They make striking errors even with small numbers, the response variability does not increase in a systematic way, and the pattern of errors depends on object category. Only the most recent proprietary systems exhibit signatures of a visual number sense. Our findings demonstrate that having an intuitive visual understanding of number remains challenging for foundation models, which in turn might be detrimental to the perceptual grounding of numeracy that in humans is crucial for mathematical learning.
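The human response pattern described in the abstract (error-free judgments for small sets, then variability growing in proportion to the target number) can be sketched with a toy simulation. This is only an illustration of the behavioral signature, not the paper's method; the Weber fraction of 0.15 and the subitizing cutoff of 4 items are assumed illustrative values.

```python
import random
import statistics

WEBER_FRACTION = 0.15  # assumed illustrative value; human Weber fractions are typically ~0.1-0.2

def estimate_numerosity(n, trials=10_000, seed=0):
    """Simulate approximate numerosity judgments with scalar variability:
    sets of up to 4 items fall in the subitizing range and are reported
    exactly, while larger sets receive Gaussian noise whose standard
    deviation grows linearly with n."""
    rng = random.Random(seed)
    if n <= 4:
        return [n] * trials  # error-free subitizing
    return [max(1, round(rng.gauss(n, WEBER_FRACTION * n))) for _ in range(trials)]

for n in (3, 8, 16, 32):
    responses = estimate_numerosity(n)
    cv = statistics.pstdev(responses) / n  # coefficient of variation
    print(f"n={n:2d}  mean={statistics.mean(responses):5.1f}  cv={cv:.3f}")
```

Above the subitizing range the coefficient of variation stays roughly constant, which is the signature of an abstract number sense that the paper reports most foundation models lack.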
Related papers
- Help Me Identify: Is an LLM+VQA System All We Need to Identify Visual Concepts? [62.984473889987605]
We present a zero-shot framework for fine-grained visual concept learning by leveraging a large language model and a Visual Question Answering (VQA) system.
We pose these questions along with the query image to a VQA system and aggregate the answers to determine the presence or absence of an object in the test images.
Our experiments demonstrate comparable performance with existing zero-shot visual classification methods and few-shot concept learning approaches.
arXiv Detail & Related papers (2024-10-17T15:16:10Z) - Estimating the distribution of numerosity and non-numerical visual magnitudes in natural scenes using computer vision [0.08192907805418582]
We show that in natural visual scenes the frequency of appearance of different numerosities follows a power law distribution.
We show that the correlational structure for numerosity and continuous magnitudes is stable across datasets and scene types.
arXiv Detail & Related papers (2024-09-17T09:49:29Z) - Evaluating Numerical Reasoning in Text-to-Image Models [16.034479049513582]
We evaluate a range of text-to-image models on numerical reasoning tasks of varying difficulty.
We show that even the most advanced models have only rudimentary numerical skills.
We bundle prompts, generated images and human annotations into GeckoNum, a novel benchmark for evaluation of numerical reasoning.
arXiv Detail & Related papers (2024-06-20T22:56:31Z) - Quantity Matters: Towards Assessing and Mitigating Number Hallucination in Large Vision-Language Models [57.42800112251644]
We focus on a specific type of hallucination, number hallucination, which refers to models incorrectly identifying the number of certain objects in pictures.
We devise a training approach aimed at improving consistency to reduce number hallucinations, which leads to an 8% enhancement in performance over direct finetuning methods.
arXiv Detail & Related papers (2024-03-03T02:31:11Z) - WinoViz: Probing Visual Properties of Objects Under Different States [39.92628807477848]
We present a text-only evaluation dataset consisting of 1,380 examples that probe the reasoning abilities of language models regarding variant visual properties of objects under different contexts or states.
Our task is challenging since it requires pragmatic reasoning (finding intended meanings) and visual knowledge reasoning.
We also present multi-hop data, a more challenging version of our data, which requires multi-step reasoning chains to solve our task.
arXiv Detail & Related papers (2024-02-21T07:31:47Z) - Bongard-HOI: Benchmarking Few-Shot Visual Reasoning for Human-Object Interactions [138.49522643425334]
Bongard-HOI is a new visual reasoning benchmark that focuses on compositional learning of human-object interactions from natural images.
It is inspired by two desirable characteristics from the classical Bongard problems (BPs): 1) few-shot concept learning, and 2) context-dependent reasoning.
Bongard-HOI presents a substantial challenge to today's visual recognition models.
arXiv Detail & Related papers (2022-05-27T07:36:29Z) - Object-Centric Diagnosis of Visual Reasoning [118.36750454795428]
This paper presents a systematic object-centric diagnosis of visual reasoning on grounding and robustness.
We develop a diagnostic model, namely Graph Reasoning Machine.
Our model replaces a purely symbolic visual representation with a probabilistic scene graph and then applies teacher-forcing training to the visual reasoning module.
arXiv Detail & Related papers (2020-12-21T18:59:28Z) - A Number Sense as an Emergent Property of the Manipulating Brain [16.186932790845937]
We study the mechanism through which humans acquire and develop the ability to manipulate numbers and quantities.
Our model acquires the ability to estimate numerosity, i.e. the number of objects in the scene.
We conclude that important aspects of a facility with numbers and quantities may be learned with supervision from a simple pre-training task.
arXiv Detail & Related papers (2020-12-08T00:37:35Z) - A robot that counts like a child: a developmental model of counting and pointing [69.26619423111092]
A novel neuro-robotics model capable of counting real items is introduced.
The model allows us to investigate the interaction between embodiment and numerical cognition.
The trained model is able to count a set of items and at the same time point to them.
arXiv Detail & Related papers (2020-08-05T21:06:27Z) - Machine Number Sense: A Dataset of Visual Arithmetic Problems for Abstract and Relational Reasoning [95.18337034090648]
We propose a dataset, Machine Number Sense (MNS), consisting of visual arithmetic problems automatically generated using a grammar model, the And-Or Graph (AOG).
These visual arithmetic problems are in the form of geometric figures.
We benchmark the MNS dataset using four predominant neural network models as baselines in this visual reasoning task.
arXiv Detail & Related papers (2020-04-25T17:14:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.