A Brain-inspired Computational Model for Human-like Concept Learning
- URL: http://arxiv.org/abs/2401.06471v1
- Date: Fri, 12 Jan 2024 09:32:51 GMT
- Title: A Brain-inspired Computational Model for Human-like Concept Learning
- Authors: Yuwei Wang and Yi Zeng
- Abstract summary: The study develops a human-like computational model for concept learning based on spiking neural networks.
By effectively addressing the challenges posed by diverse sources and imbalanced dimensionality of the two forms of concept representations, the study successfully attains human-like concept representations.
- Score: 12.737696613208632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Concept learning is a fundamental aspect of human cognition and plays a
critical role in mental processes such as categorization, reasoning, memory,
and decision-making. Researchers across various disciplines have shown
consistent interest in the process of concept acquisition in individuals. To
elucidate the mechanisms involved in human concept learning, this study
examines the findings from computational neuroscience and cognitive psychology.
These findings indicate that the brain's representation of concepts relies on
two essential components: multisensory representation and text-derived
representation. These two types of representations are coordinated by a
semantic control system, ultimately leading to the acquisition of concepts.
Drawing inspiration from this mechanism, the study develops a human-like
computational model for concept learning based on spiking neural networks. By
effectively addressing the challenges posed by diverse sources and imbalanced
dimensionality of the two forms of concept representations, the study
successfully attains human-like concept representations. Tests involving
similar concepts demonstrate that our model, which mimics the way humans learn
concepts, yields representations that closely align with human cognition.
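
The abstract's central architectural idea, coordinating a multisensory representation and a text-derived representation of unequal dimensionality into one concept representation, can be illustrated with a minimal sketch. The sketch below is an assumption-laden stand-in only: the embeddings, random projections, shared dimensionality, and fixed `alpha` weight are hypothetical placeholders, and the paper's actual spiking-neural-network model and semantic control system are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concept embeddings for the same set of concepts, drawn from two
# sources with imbalanced dimensionality (illustrative only): a multisensory
# representation and a text-derived representation.
n_concepts = 5
multisensory = rng.normal(size=(n_concepts, 300))   # e.g. perceptual features
text_derived = rng.normal(size=(n_concepts, 1024))  # e.g. distributional word vectors

def project(x, out_dim, rng):
    """Map one representation into a shared space with a random linear map.

    The paper's model learns this coordination with spiking neural networks;
    a fixed random projection is used here only to keep the sketch runnable.
    """
    w = rng.normal(size=(x.shape[1], out_dim)) / np.sqrt(x.shape[1])
    z = x @ w
    return z / np.linalg.norm(z, axis=1, keepdims=True)

shared_dim = 128
ms_shared = project(multisensory, shared_dim, rng)
td_shared = project(text_derived, shared_dim, rng)

# Stand-in for the semantic control step: a convex combination of the two
# sources in the shared space.
alpha = 0.5
fused = alpha * ms_shared + (1.0 - alpha) * td_shared
fused /= np.linalg.norm(fused, axis=1, keepdims=True)

# Pairwise cosine similarity of the fused concept representations.
similarity = fused @ fused.T
print(np.round(similarity, 2))
```

Under these assumptions, the cosine-similarity matrix in the fused space is the kind of quantity that the similar-concept tests mentioned in the abstract compare against human judgements.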
Related papers
- Vector-based Representation is the Key: A Study on Disentanglement and Compositional Generalization [77.57425909520167]
We show that it is possible to achieve both good concept recognition and novel concept composition.
We propose a method to reform the scalar-based disentanglement works to be vector-based to increase both capabilities.
arXiv Detail & Related papers (2023-05-29T13:05:15Z)
- Intrinsic Physical Concepts Discovery with Object-Centric Predictive Models [86.25460882547581]
We introduce the PHYsical Concepts Inference NEtwork (PHYCINE), a system that infers physical concepts at different abstraction levels without supervision.
We show that object representations containing the discovered physical concept variables could help achieve better performance in causal reasoning tasks.
arXiv Detail & Related papers (2023-03-03T11:52:21Z)
- Rejecting Cognitivism: Computational Phenomenology for Deep Learning [5.070542698701158]
We propose a non-representationalist framework for deep learning relying on a novel method: computational phenomenology.
We reject the modern cognitivist interpretation of deep learning, according to which artificial neural networks encode representations of external entities.
arXiv Detail & Related papers (2023-02-16T20:05:06Z)
- Formal Conceptual Views in Neural Networks [0.0]
We introduce two notions for conceptual views of a neural network, specifically a many-valued and a symbolic view.
We test the conceptual expressivity of our novel views through different experiments on the ImageNet and Fruit-360 data sets.
We demonstrate how conceptual views can be applied for abductive learning of human-comprehensible rules from neurons.
arXiv Detail & Related papers (2022-09-27T16:38:24Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAVs); see the sketch after this list.
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss case studies that showcase the utility of such concept-based explanations in synthetic settings and real-world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
- Cognitive science as a source of forward and inverse models of human decisions for robotics and control [13.502912109138249]
We look at how cognitive science can provide forward models of human decision-making.
We highlight approaches that synthesize black-box and theory-driven modeling.
We aim to provide readers with a glimpse of the range of frameworks, methodologies, and actionable insights that lie at the intersection of cognitive science and control research.
arXiv Detail & Related papers (2021-09-01T00:28:28Z)
- Cause and Effect: Concept-based Explanation of Neural Networks [3.883460584034766]
We take a step toward the interpretability of neural networks by examining their internal representations, i.e. neuron activations, against concepts.
We propose a framework to check the existence of a causal relationship between a concept (or its negation) and task classes.
arXiv Detail & Related papers (2021-05-14T18:54:17Z)
- Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
- The Evolution of Concept-Acquisition based on Developmental Psychology [4.416484585765028]
A conceptual system with rich connotation is key to improving the performance of knowledge-based artificial intelligence systems.
Finding a new method to represent concepts and construct a conceptual system will greatly improve the performance of many intelligent systems.
Developmental psychology carefully observes the process of concept acquisition in humans at the behavioral level.
arXiv Detail & Related papers (2020-11-26T01:57:24Z)
- Bongard-LOGO: A New Benchmark for Human-Level Concept Learning and Reasoning [78.13740873213223]
Bongard problems (BPs) were introduced as an inspirational challenge for visual cognition in intelligent systems.
We propose a new benchmark, Bongard-LOGO, for human-level concept learning and reasoning.
arXiv Detail & Related papers (2020-10-02T03:19:46Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)