Enacted Visual Perception: A Computational Model based on Piaget Equilibrium
- URL: http://arxiv.org/abs/2102.00339v1
- Date: Sat, 30 Jan 2021 23:52:01 GMT
- Title: Enacted Visual Perception: A Computational Model based on Piaget Equilibrium
- Authors: Aref Hakimzadeh, Yanbo Xue, and Peyman Setoodeh
- Abstract summary: We propose a computational model for the action involved in visual perception based on the notion of equilibrium as defined by Jean Piaget.
The proposed model is built around a modified version of convolutional neural networks (CNNs) with enhanced filter performance.
While the CNN plays the role of the visual system, the control signal is assumed to be a product of mind.
- Score: 1.7778609937758327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Maurice Merleau-Ponty's phenomenology of perception, analysis of
perception accounts for an element of intentionality, and in effect therefore,
perception and action cannot be viewed as distinct procedures. In the same line
of thinking, Alva Noë considers perception as a thoughtful activity that
relies on capacities for action and thought. Here, by looking into psychology
as a source of inspiration, we propose a computational model for the action
involved in visual perception based on the notion of equilibrium as defined by
Jean Piaget. In such a model, Piaget's equilibrium reflects the mind's status,
which is used to control the observation process. The proposed model is built
around a modified version of convolutional neural networks (CNNs) with enhanced
filter performance, where characteristics of filters are adaptively adjusted
via a high-level control signal that accounts for the thoughtful activity in
perception. While the CNN plays the role of the visual system, the control
signal is assumed to be a product of mind.
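
The paper itself provides no code; the following is a minimal, hypothetical PyTorch sketch of the kind of architecture the abstract describes: a CNN whose filter gains are adjusted per sample by a high-level control signal standing in for the equilibrium state of the mind. All names (EquilibriumModulatedConv, EnactedPerceptionNet, mind, control) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch, not the authors' code: a CNN whose convolutional filters
# are effectively scaled per sample by a high-level "equilibrium" control signal.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EquilibriumModulatedConv(nn.Module):
    """Convolution whose kernels are scaled by a control signal in [0, 1]."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, kernel_size, kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x, control):
        # Scaling the bias-free output by `control` is equivalent to scaling the
        # kernels per sample, i.e., adaptively adjusting filter gain.
        out = F.conv2d(x, self.weight, bias=None, padding=1)
        return out * control.view(-1, 1, 1, 1) + self.bias.view(1, -1, 1, 1)

class EnactedPerceptionNet(nn.Module):
    """Visual pathway (CNN) plus a toy 'mind' head that emits the control signal."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv1 = EquilibriumModulatedConv(3, 16)
        self.conv2 = EquilibriumModulatedConv(16, 32)
        self.classifier = nn.Linear(32, num_classes)
        # Stand-in for the mind: maps coarse image statistics to a control value,
        # interpreted here as the current degree of (dis)equilibrium.
        self.mind = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

    def forward(self, x):
        control = self.mind(x.mean(dim=(2, 3))).squeeze(-1)  # (batch,)
        h = F.relu(self.conv1(x, control))
        h = F.relu(self.conv2(h, control))
        return self.classifier(h.mean(dim=(2, 3)))           # global average pooling

net = EnactedPerceptionNet()
print(net(torch.randn(4, 3, 32, 32)).shape)  # torch.Size([4, 10])
```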
Related papers
- Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations.
In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
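
As a rough illustration of the ingredient mentioned above, here is a textbook-style bump attractor in NumPy: local excitation and broad inhibition sustain a localized bump of activity whose position can serve as a spatial numerical representation. This is a generic sketch, not the paper's spiking model or its allostatic controller.

```python
# Illustrative bump attractor (standard construction), not the paper's model:
# local excitation plus broad inhibition keep a bump alive after the cue is gone.
import numpy as np

N = 100
pos = np.arange(N)
dist = np.abs(pos[:, None] - pos[None, :])
dist = np.minimum(dist, N - dist)                    # ring topology
W = 1.5 * np.exp(-dist**2 / (2 * 5.0**2)) - 0.5      # local excitation minus global inhibition

r = np.zeros(N)
r[40:45] = 1.0                                       # brief cue near position 42
for _ in range(300):                                 # recurrent dynamics, no further input
    r = np.clip(r + 0.1 * (-r + W @ r), 0.0, 1.0)    # leaky update with saturating rates

# Read out the encoded quantity as the bump's centre of mass.
print("bump persists near position:", round(float(pos @ r / r.sum())))
```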
arXiv Detail & Related papers (2025-03-20T12:28:08Z)
- Collective motion from quantum entanglement in visual perception [6.180313500709727]
We investigate the alignment of self-propelled agents by introducing quantum entanglement in the perceptual states of neighboring agents.
Our model demonstrates that, with an appropriate choice of the entangled state, the well-known Vicsek model of flocking behavior can be derived.
This approach provides fresh insights into swarm intelligence and multi-agent coordination, revealing how classical patterns of collective behavior emerge naturally from entangled perceptual states.
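
For reference, the classical Vicsek update that the summary refers to is sketched below in NumPy; the parameter values are arbitrary and the entangled-perception variant introduced by the paper is not reproduced here.

```python
# Classical Vicsek flocking update: each agent aligns with the mean heading of
# its neighbours within a radius, plus angular noise.
import numpy as np

rng = np.random.default_rng(0)
n, L, v0, radius, eta, steps = 200, 10.0, 0.05, 1.0, 0.3, 100

pos = rng.uniform(0, L, size=(n, 2))
theta = rng.uniform(-np.pi, np.pi, size=n)

for _ in range(steps):
    dx = pos[:, None, :] - pos[None, :, :]
    dx -= L * np.round(dx / L)                      # periodic boundary conditions
    neigh = (dx**2).sum(-1) <= radius**2            # neighbourhood (includes self)
    mean_sin = (neigh * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neigh * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, size=n)
    pos = (pos + v0 * np.column_stack((np.cos(theta), np.sin(theta)))) % L

# Global alignment (order parameter): 1 = fully aligned flock, 0 = disordered.
print(np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))
```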
arXiv Detail & Related papers (2024-09-16T11:16:25Z)
- The observer effect in quantum: the case of classification [0.0]
We show that sensory information becomes intricately entangled with observer states.
This framework lays the groundwork for a quantum-probability-based understanding of the observer effect.
arXiv Detail & Related papers (2024-06-12T15:23:53Z)
- Binding Dynamics in Rotating Features [72.80071820194273]
We propose an alternative "cosine binding" mechanism, which explicitly computes the alignment between features and adjusts weights accordingly.
This allows us to draw direct connections to self-attention and biological neural processes, and to shed light on the fundamental dynamics for object-centric representations to emerge in Rotating Features.
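
A loose illustration of the stated idea, computing pairwise feature alignment and using it to weight feature interactions, is given below; it is a generic alignment-gating sketch, not the paper's exact cosine-binding rule.

```python
# Generic alignment-based binding: weight each pairwise interaction by the
# cosine similarity of the feature vectors involved (schematic stand-in only).
import numpy as np

def cosine_binding(features):
    """features: (n, d) array of feature vectors."""
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    align = norm @ norm.T                      # pairwise cosine alignment in [-1, 1]
    weights = np.clip(align, 0.0, None)        # only aligned features bind together
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ features                  # each feature aggregated over its bound partners

feats = np.random.default_rng(1).normal(size=(6, 8))
print(cosine_binding(feats).shape)             # (6, 8)
```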
arXiv Detail & Related papers (2024-02-08T12:31:08Z)
- How does the primate brain combine generative and discriminative computations in vision? [4.691670689443386]
Two contrasting conceptions of the inference process have each been influential in research on biological vision and machine vision.
We show that vision inverts a generative model through an interrogation of the evidence in a process often thought to involve top-down predictions of sensory data.
We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
arXiv Detail & Related papers (2024-01-11T16:07:58Z)
- Understanding Self-attention Mechanism via Dynamical System Perspective [58.024376086269015]
Self-attention mechanism (SAM) is widely used in various fields of artificial intelligence.
We show that the intrinsic stiffness phenomenon (SP) that arises in high-precision solutions of ordinary differential equations (ODEs) also exists widely in high-performance neural networks (NNs).
We show that the SAM is also a stiffness-aware step size adaptor that can enhance the model's representational ability to measure intrinsic SP.
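
For readers unfamiliar with SAM, a standard single-head scaled dot-product self-attention layer is sketched below; the paper's ODE/stiffness analysis itself is not reproduced.

```python
# Standard (single-head) scaled dot-product self-attention, the mechanism the
# paper analyses; parameter shapes and values here are illustrative.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])            # (seq_len, seq_len)
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)                 # softmax over keys
    return A @ V                                      # each token mixes values adaptively

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))
out = self_attention(X, *(rng.normal(size=(16, 8)) for _ in range(3)))
print(out.shape)  # (5, 8)
```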
arXiv Detail & Related papers (2023-08-19T08:17:41Z)
- Computing a human-like reaction time metric from stable recurrent vision models [11.87006916768365]
We sketch a general-purpose methodology to construct computational accounts of reaction times from a stimulus-computable, task-optimized model.
We demonstrate that our metric aligns with patterns of human reaction times for stimulus manipulations across four disparate visual decision-making tasks.
This work paves the way for exploring the temporal alignment of model and human visual strategies in the context of various other cognitive tasks.
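
One common way to realize such a metric, assumed here purely for illustration, is to run the recurrent model for several steps and report the number of steps until the decision read-out crosses a confidence threshold; the paper's own metric may be defined differently.

```python
# Hedged sketch of a reaction-time read-out for a recurrent model: count the
# recurrent steps until decision confidence crosses a threshold.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def reaction_time(step_fn, h0, max_steps=50, threshold=0.9):
    """step_fn(h) -> (new_hidden_state, class_logits); returns steps to threshold."""
    h = h0
    for t in range(1, max_steps + 1):
        h, logits = step_fn(h)
        if softmax(logits).max() >= threshold:   # confident enough to respond
            return t
    return max_steps                             # no commitment within the budget

# Toy recurrent "model" whose evidence for class 0 grows over time.
def toy_step(h):
    h = h + 0.3
    return h, np.array([h, 0.0])

print(reaction_time(toy_step, h0=0.0))           # 8 steps for this toy dynamics
```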
arXiv Detail & Related papers (2023-06-20T14:56:02Z)
- Intrinsic Physical Concepts Discovery with Object-Centric Predictive Models [86.25460882547581]
We introduce the PHYsical Concepts Inference NEtwork (PHYCINE), a system that infers physical concepts in different abstract levels without supervision.
We show that object representations containing the discovered physical concepts variables could help achieve better performance in causal reasoning tasks.
arXiv Detail & Related papers (2023-03-03T11:52:21Z)
- Adapting Brain-Like Neural Networks for Modeling Cortical Visual Prostheses [68.96380145211093]
Cortical prostheses are devices implanted in the visual cortex that attempt to restore lost vision by electrically stimulating neurons.
Currently, the vision provided by these devices is limited, and accurately predicting the visual percepts resulting from stimulation is an open challenge.
We propose to address this challenge by utilizing 'brain-like' convolutional neural networks (CNNs), which have emerged as promising models of the visual system.
arXiv Detail & Related papers (2022-09-27T17:33:19Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
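
The sketch below illustrates the general idea of learning from local prediction errors without backpropagation, in the spirit of predictive/generative coding; it is a simplified stand-in, not the paper's ANGC agent or its reinforcement-learning setup.

```python
# Minimal sketch of backprop-free learning from local prediction errors
# (predictive-coding style); all sizes and rates are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d_obs, d_latent, lr, steps = 8, 4, 0.05, 20

W = rng.normal(scale=0.1, size=(d_obs, d_latent))   # generative weights: latent -> predicted input

def settle_and_learn(x, W):
    z = np.zeros(d_latent)
    for _ in range(steps):                          # iterative inference driven by local error
        e = x - W @ z                               # prediction error (a local quantity)
        z = z + lr * (W.T @ e)                      # update latent state from its error signal
    e = x - W @ z                                   # final error after settling
    W = W + lr * np.outer(e, z)                     # local, Hebbian-like weight update
    return z, W

x = rng.normal(size=d_obs)
for _ in range(200):                                # repeated presentations of the same input
    z, W = settle_and_learn(x, W)
print(float(np.linalg.norm(x - W @ z)))             # reconstruction error shrinks toward zero
```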
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- SparseBERT: Rethinking the Importance Analysis in Self-attention [107.68072039537311]
Transformer-based models are popular for natural language processing (NLP) tasks due to their powerful capacity.
Attention map visualization of a pre-trained model is one direct method for understanding self-attention mechanism.
We propose a Differentiable Attention Mask (DAM) algorithm, which can be also applied in guidance of SparseBERT design.
arXiv Detail & Related papers (2021-02-25T14:13:44Z)
- Binding and Perspective Taking as Inference in a Generative Neural Network Model [1.0323063834827415]
A generative encoder-decoder architecture adapts its perspective and binds features by means of retrospective inference.
We show that the resulting gradient-based inference process solves the perspective taking and binding problem for known biological motion patterns.
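
A minimal sketch of gradient-based ("retrospective") inference is given below, assuming a fixed stand-in decoder: latent variables are adjusted by gradient descent until the generative model reproduces the observation. The decoder, dimensions, and optimizer here are illustrative, not the paper's trained network.

```python
# Gradient-based inference: adjust latents so a fixed generative decoder
# reproduces the observation (decoder here is a random stand-in).
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 10))  # latent -> pattern
for p in decoder.parameters():
    p.requires_grad_(False)                         # generative model fixed during inference

target = decoder(torch.tensor([0.5, -1.0, 0.2]))    # observation from an unknown latent
z = torch.zeros(3, requires_grad=True)              # latent estimate (e.g. viewpoint, bindings)
opt = torch.optim.Adam([z], lr=0.05)

for _ in range(500):                                # inference = descent in latent space
    opt.zero_grad()
    loss = ((decoder(z) - target) ** 2).mean()
    loss.backward()
    opt.step()

print(z.detach(), loss.item())                      # latent estimate, (ideally small) error
```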
arXiv Detail & Related papers (2020-12-09T16:43:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.