Cognitively Inspired Components for Social Conversational Agents
- URL: http://arxiv.org/abs/2311.05450v1
- Date: Thu, 9 Nov 2023 15:38:58 GMT
- Title: Cognitively Inspired Components for Social Conversational Agents
- Authors: Alex Clay, Eduardo Alonso, Esther Mondragón
- Abstract summary: Two key categories of problem remain for conversational agents (CAs).
The first is technical: problems arising from the approach taken in creating the CA, such as the limited scope of retrieval agents and the often nonsensical answers of earlier generative agents.
The second is social: humans perceive CAs as social actors and as a result expect them to adhere to social convention.
This paper presents a survey highlighting a potential solution to both categories of problem through the introduction of cognitively inspired additions to the CA.
- Score: 2.1408617023874443
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current conversational agents (CAs) have seen improvement in conversational
quality in recent years due to the influence of large language models (LLMs)
like GPT-3. However, two key categories of problem remain. Firstly, there are the
unique technical problems resulting from the approach taken in creating the CA,
such as scope with retrieval agents and the often nonsensical answers of former
generative agents. Secondly, humans perceive CAs as social actors, and as a
result expect the CA to adhere to social convention. Failure on the part of the
CA in this respect can lead to a poor interaction and even the perception of
threat by the user. As such, this paper presents a survey highlighting a
potential solution to both categories of problem through the introduction of
cognitively inspired additions to the CA. Through computational facsimiles of
semantic and episodic memory, emotion, working memory, and the ability to
learn, it is possible to address both the technical and social problems
encountered by CAs.
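As a rough illustration of how such additions might fit together, the sketch below composes semantic memory, episodic memory, working memory, a coarse emotion state, and a simple learning hook around a generic text generator. Only the component names come from the abstract; the class, fields, prompt format, and the `generate` callable are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class CognitiveCA:
    """Toy composition of cognitively inspired components around a text generator."""
    generate: Callable[[str], str]                                  # any text generator, e.g. a wrapped LLM call
    semantic_memory: Dict[str, str] = field(default_factory=dict)   # stable facts about the user and world
    episodic_memory: List[str] = field(default_factory=list)        # summaries of past interactions
    working_memory: List[str] = field(default_factory=list)         # the current dialogue window
    emotion: str = "neutral"                                        # coarse affective state

    def respond(self, user_utterance: str, window: int = 6) -> str:
        """Build a prompt from all memory stores and generate a reply."""
        self.working_memory.append(f"User: {user_utterance}")
        prompt = (
            f"Known facts: {self.semantic_memory}\n"
            f"Relevant past episodes: {self.episodic_memory[-3:]}\n"
            f"Current mood: {self.emotion}\n"
            + "\n".join(self.working_memory[-window:])
            + "\nAgent:"
        )
        reply = self.generate(prompt)
        self.working_memory.append(f"Agent: {reply}")
        return reply

    def learn_fact(self, key: str, value: str) -> None:
        """A stand-in for learning: new information is written to semantic memory."""
        self.semantic_memory[key] = value

    def consolidate(self) -> None:
        """Roughly mimic episodic consolidation: archive the finished exchange and clear working memory."""
        if self.working_memory:
            self.episodic_memory.append(" | ".join(self.working_memory))
            self.working_memory.clear()
```

A stub such as `CognitiveCA(generate=lambda prompt: "Hello!")` is enough to exercise the class; in practice the callable would wrap an LLM.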
Related papers
- Toward Safe Evolution of Artificial Intelligence (AI) based Conversational Agents to Support Adolescent Mental and Sexual Health Knowledge Discovery [0.22530496464901104]
We discuss the current landscape and opportunities for Conversational Agents (CAs) to support adolescents' mental and sexual health knowledge discovery.
We call for a discourse on how to set guardrails for the safe evolution of AI-based CAs for adolescents.
arXiv Detail & Related papers (2024-04-03T19:18:25Z)
- Understanding Public Perceptions of AI Conversational Agents: A Cross-Cultural Analysis [22.93365830074122]
Conversational Agents (CAs) have increasingly been integrated into everyday life, sparking significant discussions on social media.
This study used computational methods to analyze about one million social media discussions surrounding CAs.
We find that Chinese participants tended to view CAs hedonically and perceived voice-based and physically embodied CAs as warmer and more competent.
arXiv Detail & Related papers (2024-02-25T09:34:22Z)
- Sim-to-Real Causal Transfer: A Metric Learning Approach to Causally-Aware Interaction Representations [62.48505112245388]
We take an in-depth look at the causal awareness of modern representations of agent interactions.
We show that recent representations are already partially resilient to perturbations of non-causal agents.
We propose a metric learning approach that regularizes latent representations with causal annotations.
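A hedged toy version of such a regularizer might penalize embedding shifts that disagree with the annotated causal effect of removing an agent. The function name, margin formulation, and inputs below are assumptions for illustration, not the paper's loss.

```python
import numpy as np


def causal_metric_regularizer(z_full: np.ndarray,
                              z_masked: np.ndarray,
                              causal_effect: float,
                              margin: float = 1.0) -> float:
    """Toy metric-learning term: the distance between the representation of the
    full scene (z_full) and the scene with one agent removed (z_masked) should
    track that agent's annotated causal effect. Removing a non-causal agent
    should barely move the embedding; removing a strongly causal agent should
    move it by at least `margin`."""
    d = float(np.linalg.norm(z_full - z_masked))
    if causal_effect < 1e-3:            # annotated as non-causal
        return d ** 2                   # pull the two embeddings together
    return max(0.0, margin - d) ** 2    # push them at least `margin` apart
```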
arXiv Detail & Related papers (2023-12-07T18:57:03Z)
- Neural-Logic Human-Object Interaction Detection [67.4993347702353]
We present LogicHOI, a new HOI detector that leverages neural-logic reasoning and Transformer to infer feasible interactions between entities.
Specifically, we modify the self-attention mechanism in the vanilla Transformer, enabling it to reason over the <human, action, object> triplet and constitute novel interactions.
We formulate these two properties in first-order logic and ground them into continuous space to constrain the learning process of our approach, leading to improved performance and zero-shot generalization capabilities.
arXiv Detail & Related papers (2023-11-16T11:47:53Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- From human-centered to social-centered artificial intelligence: Assessing ChatGPT's impact through disruptive events [1.1858896428516252]
We argue that critiques of ChatGPT's impact in machine learning research communities have coalesced around its performance and other conventional safety evaluations relating to bias, toxicity, and "hallucination".
By analyzing ChatGPT's social impact through a social-centered framework, we challenge individualistic approaches in AI development and contribute to ongoing debates around the ethical and responsible deployment of AI systems.
arXiv Detail & Related papers (2023-05-31T22:46:48Z)
- TalkTive: A Conversational Agent Using Backchannels to Engage Older Adults in Neurocognitive Disorders Screening [51.97352212369947]
We analyzed 246 conversations of cognitive assessments between older adults and human assessors.
We derived the categories of reactive backchannels and proactive backchannels.
These categories informed the development of TalkTive, a CA that can predict both the timing and form of backchanneling.
arXiv Detail & Related papers (2022-02-16T17:55:34Z)
- Adversarial Attacks in Cooperative AI [0.0]
Single-agent reinforcement learning algorithms in a multi-agent environment are inadequate for fostering cooperation.
Recent work in adversarial machine learning shows that models can be easily deceived into making incorrect decisions.
Cooperative AI might introduce new weaknesses not investigated in previous machine learning research.
arXiv Detail & Related papers (2021-11-29T07:34:12Z)
- Retrieval Augmentation Reduces Hallucination in Conversation [49.35235945543833]
We explore the use of neural-retrieval-in-the-loop architectures for knowledge-grounded dialogue.
We show that our best models obtain state-of-the-art performance on two knowledge-grounded conversational tasks.
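The retrieval-in-the-loop idea can be sketched in a few lines. Here a bag-of-words cosine retriever stands in for the neural retriever and `generate` is any text generator; this is an assumed illustration of the general pattern, not the paper's architecture.

```python
import math
from collections import Counter
from typing import Callable, List


def _bow(text: str) -> Counter:
    return Counter(text.lower().split())


def _cosine(a: Counter, b: Counter) -> float:
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


def retrieve(query: str, documents: List[str], k: int = 2) -> List[str]:
    """Return the k documents most similar to the query."""
    q = _bow(query)
    return sorted(documents, key=lambda d: _cosine(q, _bow(d)), reverse=True)[:k]


def grounded_reply(history: List[str], documents: List[str],
                   generate: Callable[[str], str]) -> str:
    """Condition generation on retrieved evidence rather than parametric memory alone."""
    evidence = retrieve(history[-1], documents)
    prompt = ("Evidence:\n" + "\n".join(evidence)
              + "\nDialogue:\n" + "\n".join(history) + "\nAgent:")
    return generate(prompt)
```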
arXiv Detail & Related papers (2021-04-15T16:24:43Z)
- Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation of content preservation and social language level, using both human judgment and automatic linguistic measures, shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
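One simple way to realize such a trade-off, offered only as a hedged sketch (the lexicon, weighting, and reranking setup are assumptions rather than the paper's sequence-to-sequence extension), is to rerank candidate responses by a blend of task adequacy and a politeness score:

```python
from typing import Callable, List

# Tiny illustrative lexicon; a learned social-language model would replace this.
POLITE_MARKERS = {"please", "thanks", "thank", "glad", "happy", "sorry", "appreciate", "welcome"}


def politeness_score(text: str) -> float:
    """Fraction of tokens that are polite/positive markers."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    return sum(t in POLITE_MARKERS for t in tokens) / max(len(tokens), 1)


def pick_social_response(candidates: List[str],
                         task_score: Callable[[str], float],
                         alpha: float = 0.5) -> str:
    """Rerank candidates by a blend of content adequacy and social language level."""
    return max(candidates,
               key=lambda c: (1 - alpha) * task_score(c) + alpha * politeness_score(c))
```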
arXiv Detail & Related papers (2020-12-29T08:22:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.