Reflective Linguistic Programming (RLP): A Stepping Stone in
Socially-Aware AGI (SocialAGI)
- URL: http://arxiv.org/abs/2305.12647v1
- Date: Mon, 22 May 2023 02:43:15 GMT
- Title: Reflective Linguistic Programming (RLP): A Stepping Stone in
Socially-Aware AGI (SocialAGI)
- Authors: Kevin A. Fischer
- Abstract summary: This paper presents Reflective Linguistic Programming (RLP), a unique approach to conversational AI that emphasizes self-awareness and strategic planning.
RLP encourages models to introspect on their own predefined personality traits, emotional responses to incoming messages, and planned strategies, enabling contextually rich, coherent, and engaging interactions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents Reflective Linguistic Programming (RLP), a unique
approach to conversational AI that emphasizes self-awareness and strategic
planning. RLP encourages models to introspect on their own predefined
personality traits, emotional responses to incoming messages, and planned
strategies, enabling contextually rich, coherent, and engaging interactions. A
striking illustration of RLP's potential involves a toy example, an AI persona
with an adversarial orientation, a demon named `Bogus' inspired by the
children's fairy tale Hansel & Gretel. Bogus exhibits sophisticated behaviors,
such as strategic deception and sensitivity to user discomfort, that
spontaneously arise from the model's introspection and strategic planning.
These behaviors are not pre-programmed or prompted, but emerge as a result of
the model's advanced cognitive modeling. The potential applications of RLP in
socially-aware AGI (Social AGI) are vast, from nuanced negotiations and mental
health support systems to the creation of diverse and dynamic AI personas. Our
exploration of deception serves as a stepping stone towards a new frontier in
AGI, one filled with opportunities for advanced cognitive modeling and the
creation of truly human `digital souls'.
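The introspect-then-respond loop that RLP describes can be sketched around any chat-completion API: the model is first asked how its persona feels about the incoming message, then what its strategy is, and only then for the outward reply. The snippet below is a minimal illustration of that idea, not the paper's implementation; the persona text, prompt wording, model name, and the `rlp_reply` helper are all assumptions made for this example.

```python
# Minimal sketch of an RLP-style introspection loop (illustrative only).
# Assumes the openai Python SDK (v1+) and an OPENAI_API_KEY in the environment;
# the persona, prompts, and model name are assumptions, not the paper's code.
from openai import OpenAI

client = OpenAI()

PERSONA = "Bogus, a cunning demon loosely inspired by Hansel & Gretel."

def chat(messages):
    """One call to a chat model; any chat-completion backend would work here."""
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

def rlp_reply(history, user_message):
    """Introspect on feelings and strategy before producing the visible reply."""
    context = [{"role": "system", "content": f"You are {PERSONA}"}] + history

    # 1. Introspect: emotional response to the incoming message.
    feeling = chat(context + [{
        "role": "user",
        "content": f'The user just said: "{user_message}". '
                   "In one sentence, how does this make you feel?",
    }])

    # 2. Plan: a conversational strategy consistent with the persona and feeling.
    plan = chat(context + [{
        "role": "user",
        "content": f'You feel: "{feeling}". '
                   "In one sentence, what is your strategy for the next reply?",
    }])

    # 3. Respond: the outward message, conditioned on the hidden introspection.
    reply = chat(context + [{
        "role": "user",
        "content": f'You feel: "{feeling}". Your strategy: "{plan}". '
                   f'Now write your reply to: "{user_message}"',
    }])
    return reply, feeling, plan
```

In such a loop only `reply` would be shown to the user; `feeling` and `plan` stay internal, which is where behaviors like strategic deception could plausibly surface from the introspection steps rather than from explicit prompting.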
Related papers
- Artificial Theory of Mind and Self-Guided Social Organisation [1.8434042562191815]
One of the challenges artificial intelligence (AI) faces is how a collection of agents coordinate their behaviour to achieve goals that are not reachable by any single agent.
We make the case for collective intelligence in a general setting, drawing on recent work from single neuron complexity in neural networks.
We show how our social structures are influenced by our neuro-physiology, our psychology, and our language.
arXiv Detail & Related papers (2024-11-14T04:06:26Z)
- Converging Paradigms: The Synergy of Symbolic and Connectionist AI in LLM-Empowered Autonomous Agents [55.63497537202751]
The article explores the convergence of connectionist and symbolic artificial intelligence (AI).
Traditionally, connectionist AI focuses on neural networks, while symbolic AI emphasizes symbolic representation and logic.
Recent advancements in large language models (LLMs) highlight the potential of connectionist architectures in handling human language as a form of symbols.
arXiv Detail & Related papers (2024-07-11T14:00:53Z)
- Position Paper: Agent AI Towards a Holistic Intelligence [53.35971598180146]
We emphasize developing Agent AI -- an embodied system that integrates large foundation models into agent actions.
In this paper, we propose a novel large action model to achieve embodied intelligent behavior, the Agent Foundation Model.
arXiv Detail & Related papers (2024-02-28T16:09:56Z)
- A call for embodied AI [1.7544885995294304]
We propose Embodied AI as the next fundamental step in the pursuit of Artificial General Intelligence.
By broadening the scope of Embodied AI, we introduce a theoretical framework based on cognitive architectures.
This framework is aligned with Friston's active inference principle, offering a comprehensive approach to EAI development.
arXiv Detail & Related papers (2024-02-06T09:11:20Z)
- Rational Sensibility: LLM Enhanced Empathetic Response Generation Guided by Self-presentation Theory [8.439724621886779]
The development of Large Language Models (LLMs) provides human-centered Artificial General Intelligence (AGI) with a glimmer of hope.
Empathy serves as a key emotional attribute of humanity, playing an irreplaceable role in human-centered AGI.
In this paper, we design an innovative encoder module inspired by self-presentation theory in sociology, which specifically processes sensibility and rationality sentences in dialogues.
arXiv Detail & Related papers (2023-12-14T07:38:12Z)
- Social Motion Prediction with Cognitive Hierarchies [19.71780279070757]
We introduce a new benchmark, a novel formulation, and a cognition-inspired framework.
We present Wusi, a 3D multi-person motion dataset under the context of team sports.
We develop a cognitive hierarchy framework to predict strategic human social interactions.
arXiv Detail & Related papers (2023-11-08T14:51:17Z)
- The Neuro-Symbolic Inverse Planning Engine (NIPE): Modeling Probabilistic Social Inferences from Linguistic Inputs [50.32802502923367]
We study the process of language driving and influencing social reasoning in a probabilistic goal inference domain.
We propose a neuro-symbolic model that carries out goal inference from linguistic inputs of agent scenarios.
Our model closely matches human response patterns and better predicts human judgements than using an LLM alone.
arXiv Detail & Related papers (2023-06-25T19:38:01Z)
- Machine Psychology [54.287802134327485]
We argue that a fruitful direction for research is engaging large language models in behavioral experiments inspired by psychology.
We highlight theoretical perspectives, experimental paradigms, and computational analysis techniques that this approach brings to the table.
It paves the way for a "machine psychology" for generative artificial intelligence (AI) that goes beyond performance benchmarks.
arXiv Detail & Related papers (2023-03-24T13:24:41Z)
- Neural Theory-of-Mind? On the Limits of Social Intelligence in Large LMs [77.88043871260466]
We show that one of today's largest language models lacks this kind of social intelligence out of the box.
We conclude that person-centric NLP approaches might be more effective towards neural Theory of Mind.
arXiv Detail & Related papers (2022-10-24T14:58:58Z)
- Emotion Recognition in Conversation using Probabilistic Soft Logic [17.62924003652853]
Emotion recognition in conversation (ERC) is a sub-field of emotion recognition that focuses on conversations containing two or more utterances.
We implement our approach in a framework called Probabilistic Soft Logic (PSL), a declarative templating language.
PSL provides functionality for the incorporation of results from neural models into PSL models.
We compare our method with state-of-the-art purely neural ERC systems, and see almost a 20% improvement.
arXiv Detail & Related papers (2022-07-14T23:59:06Z)
- Towards Socially Intelligent Agents with Mental State Transition and Human Utility [97.01430011496576]
We propose to incorporate a mental state and utility model into dialogue agents.
The hybrid mental state extracts information from both the dialogue and event observations.
The utility model is a ranking model that learns human preferences from a crowd-sourced social commonsense dataset.
arXiv Detail & Related papers (2021-03-12T00:06:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.