Computational Thought Experiments for a More Rigorous Philosophy and Science of the Mind
- URL: http://arxiv.org/abs/2405.08304v2
- Date: Wed, 15 May 2024 02:32:00 GMT
- Title: Computational Thought Experiments for a More Rigorous Philosophy and Science of the Mind
- Authors: Iris Oved, Nikhil Krishnaswamy, James Pustejovsky, Joshua Hartshorne
- Abstract summary: We offer philosophical motivations for a method we call Virtual World Cognitive Science (VW CogSci), in which researchers use virtual embodied agents embedded in virtual worlds to explore questions in Cognitive Science.
We focus on questions about mental and linguistic representation and the ways that such computational modeling can add rigor to philosophical thought experiments.
- Score: 7.101125921299772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We offer philosophical motivations for a method we call Virtual World Cognitive Science (VW CogSci), in which researchers use virtual embodied agents that are embedded in virtual worlds to explore questions in the field of Cognitive Science. We focus on questions about mental and linguistic representation and the ways that such computational modeling can add rigor to philosophical thought experiments, as well as the terminology used in the scientific study of such representations. We find that this method forces researchers to take a god's-eye view when describing dynamical relationships between entities in minds and entities in an environment in a way that eliminates the need for problematic talk of belief and concept types, such as the belief that cats are silly, and the concept CAT, while preserving belief and concept tokens in individual cognizers' minds. We conclude with some further key advantages of VW CogSci for the scientific study of mental and linguistic representation and for Cognitive Science more broadly.
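To make the contrast between belief/concept types and tokens concrete, here is a minimal, purely illustrative sketch (not code from the paper) of the modeling stance the abstract describes: a virtual world of entities, a simulated cognizer whose mind contains only its own concept and belief tokens, and a god's-eye report that relates those tokens to world entities. All names (WorldEntity, ConceptToken, Cognizer, gods_eye_report) are hypothetical choices of ours, not constructs defined by VW CogSci.

```python
# Minimal, hypothetical sketch of the stance described above: a virtual world
# of entities, a cognizer that holds only its own concept/belief *tokens*
# (private data structures), and a god's-eye researcher-level report that
# relates those tokens to world entities without appeal to shared *types*.
from dataclasses import dataclass, field


@dataclass
class WorldEntity:
    """An entity in the virtual environment, described from the god's-eye view."""
    entity_id: str
    features: dict


@dataclass
class ConceptToken:
    """A concept token in one cognizer's mind: that agent's own bundle of
    feature expectations, not an instance of an abstract shared concept."""
    label: str
    expected_features: dict


@dataclass
class BeliefToken:
    """A belief token: this agent's stored association between one of its own
    concept tokens and a predicate (here the toy predicate 'is_silly')."""
    concept: ConceptToken
    predicate: str


@dataclass
class Cognizer:
    name: str
    concept_tokens: list = field(default_factory=list)
    belief_tokens: list = field(default_factory=list)

    def observe(self, entity: WorldEntity) -> None:
        """Form a new private concept token from an encounter with an entity."""
        token = ConceptToken(
            label=f"{self.name}_tok_{len(self.concept_tokens)}",
            expected_features=dict(entity.features),
        )
        self.concept_tokens.append(token)
        self.belief_tokens.append(BeliefToken(concept=token, predicate="is_silly"))


def gods_eye_report(agent: Cognizer, world: list) -> None:
    """Researcher-level description of which world entities each of this
    agent's tokens tracks; no shared concept type is quantified over."""
    for token in agent.concept_tokens:
        tracked = [e.entity_id for e in world
                   if e.features.items() >= token.expected_features.items()]
        print(f"{agent.name}: {token.label} tracks {tracked}")


world = [WorldEntity("cat_1", {"furry": True, "meows": True}),
         WorldEntity("rock_1", {"furry": False, "meows": False})]
alice = Cognizer("alice")
alice.observe(world[0])
gods_eye_report(alice, world)   # -> alice: alice_tok_0 tracks ['cat_1']
```

Running the sketch prints which world entities the agent's private token tracks; at no point does the description quantify over a shared concept CAT or a belief type such as "cats are silly" that multiple cognizers literally share.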
Related papers
- LLM and Simulation as Bilevel Optimizers: A New Paradigm to Advance Physical Scientific Discovery [141.39722070734737]
We propose to enhance the knowledge-driven, abstract reasoning abilities of Large Language Models with the computational strength of simulations.
We introduce Scientific Generative Agent (SGA), a bilevel optimization framework.
We conduct experiments to demonstrate our framework's efficacy in law discovery and molecular design.
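As a rough sketch of the bilevel pattern this summary describes (a hypothetical stand-in, not SGA's actual implementation): an outer proposer, playing the role of the LLM, suggests a symbolic form, and an inner routine, playing the role of the simulation, fits the form's continuous parameters and reports its error.

```python
# Hypothetical sketch of a bilevel loop: the outer level proposes a symbolic
# form (standing in for an LLM), the inner level fits its parameters against
# data (standing in for a simulation) and returns the residual error.
import math

CANDIDATE_FORMS = {                       # assumed outer-level search space
    "linear":    lambda x, a, b: a * x + b,
    "quadratic": lambda x, a, b: a * x * x + b,
    "sine":      lambda x, a, b: a * math.sin(b * x),
}


def inner_fit(form, data, steps=2000, lr=1e-3):
    """Inner level: crude coordinate search over the parameters (a, b)."""
    def loss(a, b):
        return sum((form(x, a, b) - y) ** 2 for x, y in data) / len(data)
    a, b = 1.0, 0.0
    for _ in range(steps):
        for delta in (lr, -lr):
            if loss(a + delta, b) < loss(a, b):
                a += delta
            if loss(a, b + delta) < loss(a, b):
                b += delta
    return (a, b), loss(a, b)


# Outer level: try each proposed form, keep the one the inner level fits best.
data = [(x / 10, 2.0 * (x / 10) + 1.0) for x in range(20)]  # hidden law: y = 2x + 1
best = min(((name,) + inner_fit(f, data) for name, f in CANDIDATE_FORMS.items()),
           key=lambda t: t[2])
print("best form:", best[0], "fitted params:", best[1], "mse:", round(best[2], 5))
```

In SGA itself the outer proposals come from a Large Language Model and the inner evaluation from simulation, as the summary above states; the sketch only shows the division of labor.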
arXiv Detail & Related papers (2024-05-16T03:04:10Z)
- Neuromorphic Correlates of Artificial Consciousness [1.4957306171002251]
The concept of neural correlates of consciousness (NCC) suggests that specific neural activities are linked to conscious experiences.
This paper explores the potential for artificial consciousness by merging neuromorphic design and architecture with brain simulations.
arXiv Detail & Related papers (2024-05-03T09:27:51Z)
- A Brain-inspired Computational Model for Human-like Concept Learning [12.737696613208632]
The study develops a human-like computational model for concept learning based on spiking neural networks.
By effectively addressing the challenges posed by diverse sources and imbalanced dimensionality of the two forms of concept representations, the study successfully attains human-like concept representations.
arXiv Detail & Related papers (2024-01-12T09:32:51Z) - AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - Intrinsic Physical Concepts Discovery with Object-Centric Predictive
Models [86.25460882547581]
We introduce the PHYsical Concepts Inference NEtwork (PHYCINE), a system that infers physical concepts at different levels of abstraction without supervision.
We show that object representations containing the discovered physical concept variables could help achieve better performance in causal reasoning tasks.
arXiv Detail & Related papers (2023-03-03T11:52:21Z) - Rejecting Cognitivism: Computational Phenomenology for Deep Learning [5.070542698701158]
We propose a non-representationalist framework for deep learning relying on a novel method: computational phenomenology.
We reject the modern cognitivist interpretation of deep learning, according to which artificial neural networks encode representations of external entities.
arXiv Detail & Related papers (2023-02-16T20:05:06Z) - Language Cognition and Language Computation -- Human and Machine
Language Understanding [51.56546543716759]
Language understanding is a key scientific issue in the fields of cognitive and computer science.
Can a combination of the disciplines offer new insights for building intelligent language models?
arXiv Detail & Related papers (2023-01-12T02:37:00Z) - To think inside the box, or to think out of the box? Scientific
discovery via the reciprocation of insights and concepts [26.218943558900552]
We view scientific discovery as an interplay between thinking out of the box, which actively seeks insightful solutions, and thinking inside the box, which draws on existing concepts.
We propose Mindle, a semantic searching game that triggers scientific-discovery-like thinking spontaneously.
On this basis, the meta-strategies for insights and the usage of concepts can be investigated reciprocally.
arXiv Detail & Related papers (2022-12-01T03:52:12Z) - A Quantitative Symbolic Approach to Individual Human Reasoning [0.0]
We take findings from the literature and show how these, formalized as cognitive principles within a logical framework, can establish a quantitative notion of reasoning.
We employ techniques from non-monotonic reasoning and computer science, namely a solving paradigm called answer set programming (ASP).
Finally, we can fruitfully use plausibility reasoning in ASP to test the effects of an existing experiment and explain different majority responses.
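As a tiny illustration of the non-monotonic flavor this entry relies on, written in plain Python rather than ASP and not reproducing the authors' formalization: a default conclusion is withdrawn once an exception becomes known.

```python
# Plain-Python illustration of defeasible (non-monotonic) reasoning; this is
# NOT answer set programming and not the authors' formalization -- it only
# shows that a default conclusion can be retracted by new information.

def flies(individual, facts):
    """Default rule: birds fly unless known to be flightless."""
    props = facts.get(individual, set())
    return "bird" in props and "flightless" not in props


facts = {"tweety": {"bird"}}
print("tweety flies?", flies("tweety", facts))   # True -- default applies

facts["tweety"].add("flightless")                # an exception is learned
print("tweety flies?", flies("tweety", facts))   # False -- conclusion withdrawn
```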
arXiv Detail & Related papers (2022-05-10T16:43:47Z) - Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)