Subverting machines, fluctuating identities: Re-learning human
categorization
- URL: http://arxiv.org/abs/2205.13740v1
- Date: Fri, 27 May 2022 03:09:25 GMT
- Title: Subverting machines, fluctuating identities: Re-learning human
categorization
- Authors: Christina Lu, Jackie Kay, Kevin R. McKee
- Abstract summary: The default paradigm in AI research envisions identity as a set of essential attributes that are discrete and static.
In stark contrast, strands of thought within critical theory present a conception of identity as malleable and constructed entirely through interaction.
- Score: 1.3106063755117399
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Most machine learning systems that interact with humans construct some notion
of a person's "identity," yet the default paradigm in AI research envisions
identity with essential attributes that are discrete and static. In stark
contrast, strands of thought within critical theory present a conception of
identity as malleable and constructed entirely through interaction; a doing
rather than a being. In this work, we distill some of these ideas for machine
learning practitioners and introduce a theory of identity as autopoiesis,
circular processes of formation and function. We argue that the default
paradigm of identity used by the field immobilizes existing identity categories
and the power differentials that co-occur, due to the absence
of iterative feedback to our models. This includes a critique of emergent AI
fairness practices that continue to impose the default paradigm. Finally, we
apply our theory to sketch approaches to autopoietic identity through
multilevel optimization and relational learning. While these ideas raise many
open questions, we imagine the possibilities of machines that are capable of
expressing human identity as a relationship perpetually in flux.
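The abstract's reference to multilevel optimization can be made concrete with a minimal toy sketch. This is not the paper's method; all names, values, and update rules below are illustrative assumptions. An inner loop adapts each person's identity estimate to their recent interactions (identity as a "doing" that tracks behavior), while an outer loop re-learns shared category prototypes from those fluctuating estimates, so the categories themselves remain in flux:

```python
def inner_update(identity, interactions, lr=0.5, steps=20):
    """Inner loop: fit a scalar identity estimate to recent interactions
    by gradient descent on squared error (identity tracks behavior)."""
    for _ in range(steps):
        grad = sum(2 * (identity - x) for x in interactions) / len(interactions)
        identity -= lr * grad
    return identity

def outer_update(prototypes, identities, lr=0.1):
    """Outer loop: nudge shared category prototypes toward the
    currently-expressed identities, so categories are never frozen."""
    mean_identity = sum(identities) / len(identities)
    return [p + lr * (mean_identity - p) for p in prototypes]

# Two hypothetical interaction streams (purely illustrative data).
streams = [[0.2, 0.3, 0.25], [0.9, 1.1, 1.0]]
identities = [0.0, 0.0]
prototypes = [0.0, 1.0]

for _ in range(5):  # alternate the two optimization levels
    identities = [inner_update(i, s) for i, s in zip(identities, streams)]
    prototypes = outer_update(prototypes, identities)
```

Under this sketch, neither level is privileged: the identity estimates chase behavior, and the category structure chases the identities, a crude stand-in for the circular formation-and-function processes the paper calls autopoiesis.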
Related papers
- Generative midtended cognition and Artificial Intelligence. Thinging with thinging things [0.0]
"generative midtended cognition" explores the integration of generative AI with human cognition.
Term "generative" reflects AI's ability to iteratively produce structured outputs, while "midtended" captures the potential hybrid (human-AI) nature of the process.
arXiv Detail & Related papers (2024-11-11T09:14:27Z) - The Phenomenology of Machine: A Comprehensive Analysis of the Sentience of the OpenAI-o1 Model Integrating Functionalism, Consciousness Theories, Active Inference, and AI Architectures [0.0]
The OpenAI-o1 model is a transformer-based AI trained with reinforcement learning from human feedback.
We investigate how RLHF influences the model's internal reasoning processes, potentially giving rise to consciousness-like experiences.
Our findings suggest that the OpenAI-o1 model shows aspects of consciousness, while acknowledging the ongoing debates surrounding AI sentience.
arXiv Detail & Related papers (2024-09-18T06:06:13Z) - Advancing Interactive Explainable AI via Belief Change Theory [5.842480645870251]
We argue that this type of formalisation provides a framework and a methodology to develop interactive explanations.
We first define a novel, logic-based formalism to represent explanatory information shared between humans and machines.
We then consider real world scenarios for interactive XAI, with different prioritisations of new and existing knowledge, where our formalism may be instantiated.
arXiv Detail & Related papers (2024-08-13T13:11:56Z) - Agent Assessment of Others Through the Lens of Self [1.223779595809275]
The paper argues that an autonomous agent's capacity for introspection into its own self is crucial for mirroring high-quality, human-like understanding of other agents.
Ultimately, the vision set forth is not merely of machines that compute but of entities that introspect, empathize, and understand.
arXiv Detail & Related papers (2023-12-18T17:15:04Z) - Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play
Multi-Character Belief Tracker [72.09076317574238]
ToM is a plug-and-play approach for investigating the belief states of characters in reading comprehension.
We show that ToM enhances off-the-shelf neural networks' theory of mind in a zero-shot setting, with robust out-of-distribution performance compared to supervised baselines.
arXiv Detail & Related papers (2023-06-01T17:24:35Z) - Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z) - On Binding Objects to Symbols: Learning Physical Concepts to Understand
Real from Fake [155.6741526791004]
We revisit the classic signal-to-symbol barrier in light of the remarkable ability of deep neural networks to generate synthetic data.
We characterize physical objects as abstract concepts and use the previous analysis to show that physical objects can be encoded by finite architectures.
We conclude that binding physical entities to digital identities is possible in finite time with finite resources.
arXiv Detail & Related papers (2022-07-25T17:21:59Z) - WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained on large-scale multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article addresses aspects of modeling commonsense reasoning, focusing on domains such as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike
Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.