Engineering Sentience
- URL: http://arxiv.org/abs/2506.20504v1
- Date: Wed, 25 Jun 2025 14:49:50 GMT
- Title: Engineering Sentience
- Authors: Konstantin Demin, Taylor Webb, Eric Elmoznino, Hakwan Lau
- Abstract summary: We spell out a definition of sentience that may be useful for designing and building it in machines. For sentience to be meaningful for AI, it must be fleshed out in functional, computational terms.
- Score: 0.9999629695552195
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We spell out a definition of sentience that may be useful for designing and building it in machines. We propose that for sentience to be meaningful for AI, it must be fleshed out in functional, computational terms, in enough detail to allow for implementation. Yet, this notion of sentience must also reflect something essentially 'subjective', beyond just having the general capacity to encode perceptual content. For this specific functional notion of sentience to occur, we propose that certain sensory signals need to be both assertoric (persistent) and qualitative. To illustrate the definition in more concrete terms, we sketch out some ways for potential implementation, given current technology. Understanding what it takes for artificial agents to be functionally sentient can also help us avoid creating them inadvertently, or at least, realize that we have created them in a timely manner.
Related papers
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization. Here, we propose a definition, which we call representational compositionality, that accounts for and extends our intuitions about compositionality. We show how it unifies disparate intuitions from across the literature in both AI and cognitive science.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- Help Me Identify: Is an LLM+VQA System All We Need to Identify Visual Concepts? [62.984473889987605]
We present a zero-shot framework for fine-grained visual concept learning that leverages a large language model and a Visual Question Answering (VQA) system.
We pose these questions along with the query image to a VQA system and aggregate the answers to determine the presence or absence of an object in the test images.
Our experiments demonstrate comparable performance with existing zero-shot visual classification methods and few-shot concept learning approaches.
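The answer-aggregation step described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the majority-vote rule and the `concept_present` helper are our assumptions, and in a real pipeline the answers would come from an actual VQA model queried with LLM-generated attribute questions.

```python
def concept_present(answers, threshold=0.5):
    """Aggregate yes/no VQA answers for one test image.

    The concept is judged present when the fraction of 'yes'
    answers exceeds the threshold (a simple majority vote).
    """
    yes = sum(1 for a in answers if a.strip().lower() == "yes")
    return yes / len(answers) > threshold

# Hypothetical usage: one answer per attribute question posed to the VQA
# system about a single query image.
answers = ["yes", "yes", "no", "yes"]
print(concept_present(answers))  # True: 3 of 4 answers affirm the concept
```

Thresholded voting is only one plausible aggregation rule; weighting answers by the VQA model's confidence would be a natural refinement.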
arXiv Detail & Related papers (2024-10-17T15:16:10Z)
- An Essay concerning machine understanding [0.0]
This essay describes how we could go about constructing a machine capable of understanding.
To understand a word is to know and be able to work with the underlying concepts for which it is an indicator.
arXiv Detail & Related papers (2024-05-03T04:12:43Z)
- Preliminaries to artificial consciousness: a multidimensional heuristic approach [0.0]
The pursuit of artificial consciousness requires conceptual clarity to navigate its theoretical and empirical challenges. This paper introduces a composite, multilevel, and multidimensional model of consciousness as a framework to guide research in this field.
arXiv Detail & Related papers (2024-03-29T13:47:47Z)
- Aligning Robot and Human Representations [50.070982136315784]
We argue that current representation learning approaches in robotics should be studied from the perspective of how well they accomplish the objective of representation alignment.
We mathematically define the problem, identify its key desiderata, and situate current methods within this formalism.
arXiv Detail & Related papers (2023-02-03T18:59:55Z)
- What does it mean to represent? Mental representations as falsifiable memory patterns [8.430851504111585]
We argue that causal and teleological approaches fail to provide a satisfactory account of representation.
We sketch an alternative according to which representations correspond to inferred latent structures in the world.
These structures are assumed to have certain properties objectively, which allows for planning, prediction, and detection of unexpected events.
arXiv Detail & Related papers (2022-03-06T12:52:42Z)
- Existence and perception as the basis of AGI (Artificial General Intelligence) [0.0]
AGI, unlike AI, should operate with meanings; this ability is what distinguishes it from AI.
For AGI, which emulates human thinking, this ability is crucial.
Numerous attempts to define the concept of "meaning" share one significant drawback: none of these definitions is strict and formalized, so they cannot be programmed.
arXiv Detail & Related papers (2022-01-30T14:06:43Z)
- An argument for the impossibility of machine intelligence [0.0]
We define what it is to be an agent (device) that could be the bearer of AI.
We show that the mainstream definitions of 'intelligence' are too weak even to capture what is involved when we ascribe intelligence to an insect.
We identify the properties that an AI agent would need to possess in order to be the bearer of intelligence by this definition.
arXiv Detail & Related papers (2021-10-20T08:54:48Z)
- Philosophical Specification of Empathetic Ethical Artificial Intelligence [0.0]
An ethical AI must be capable of inferring unspoken rules, interpreting nuance and context, and inferring intent.
We use enactivism, semiotics, perceptual symbol systems and symbol emergence to specify an agent.
It has malleable intent because the meaning of symbols changes as it learns, and its intent is represented symbolically as a goal.
arXiv Detail & Related papers (2021-07-22T14:37:46Z)
- Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and Execution [97.50813120600026]
Spatial-temporal reasoning is a challenging task in Artificial Intelligence (AI).
Recent works have focused on an abstract reasoning task of this kind: Raven's Progressive Matrices (RPM).
We propose a neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner.
arXiv Detail & Related papers (2021-03-26T02:42:18Z)
- Inductive Biases for Deep Learning of Higher-Level Cognition [108.89281493851358]
A fascinating hypothesis is that human and animal intelligence could be explained by a few principles.
This work considers a larger list, focusing on those which concern mostly higher-level and sequential conscious processing.
The objective of clarifying these particular principles is that they could potentially help us build AI systems benefiting from humans' abilities.
arXiv Detail & Related papers (2020-11-30T18:29:25Z)
- Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.