The Term 'Agent' Has Been Diluted Beyond Utility and Requires Redefinition
- URL: http://arxiv.org/abs/2508.05338v1
- Date: Thu, 07 Aug 2025 12:40:25 GMT
- Title: The Term 'Agent' Has Been Diluted Beyond Utility and Requires Redefinition
- Authors: Brinnae Bent
- Abstract summary: The term 'agent' in artificial intelligence has long carried multiple interpretations across different subfields. Recent developments in AI capabilities, particularly in large language model systems, have amplified this ambiguity. We propose a framework that defines clear minimum requirements for a system to be considered an agent.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The term 'agent' in artificial intelligence has long carried multiple interpretations across different subfields. Recent developments in AI capabilities, particularly in large language model systems, have amplified this ambiguity, creating significant challenges in research communication, system evaluation and reproducibility, and policy development. This paper argues that the term 'agent' requires redefinition. Drawing from historical analysis and contemporary usage patterns, we propose a framework that defines clear minimum requirements for a system to be considered an agent while characterizing systems along a multidimensional spectrum of environmental interaction, learning and adaptation, autonomy, goal complexity, and temporal coherence. This approach provides precise vocabulary for system description while preserving the term's historically multifaceted nature. After examining potential counterarguments and implementation challenges, we provide specific recommendations for moving forward as a field, including suggestions for terminology standardization and framework adoption. The proposed approach offers practical tools for improving research clarity and reproducibility while supporting more effective policy development.
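The abstract names five dimensions along which the proposed framework characterizes systems. As a purely illustrative sketch, they can be modeled as a simple data structure; the dimension names come from the abstract, but the 0-to-1 scoring scale, the class layout, and the `meets_minimum` floor below are assumptions for illustration, not the paper's actual framework.

```python
from dataclasses import dataclass, fields

# Illustrative sketch only: the five dimensions are named in the paper's
# abstract, but the 0-1 scale and the minimum-requirement floor are
# assumptions, not the paper's definitions.
@dataclass
class AgentProfile:
    environmental_interaction: float  # degree of sensing and acting on an environment
    learning_and_adaptation: float    # ability to improve from experience
    autonomy: float                   # independence from human intervention
    goal_complexity: float            # richness of the objectives pursued
    temporal_coherence: float         # consistency of behavior over time

    def meets_minimum(self, threshold: float = 0.2) -> bool:
        """Hypothetical minimum requirement: every dimension clears a floor."""
        return all(getattr(self, f.name) >= threshold for f in fields(self))

# Example: a system strong on interaction but with almost no autonomy
profile = AgentProfile(0.6, 0.3, 0.1, 0.4, 0.1)
print(profile.meets_minimum())  # False under the hypothetical 0.2 floor
```

A dataclass keeps each dimension explicit and independently scored, which mirrors the abstract's point that agenthood is a multidimensional spectrum rather than a binary label.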
Related papers
- Decoding the Multimodal Maze: A Systematic Review on the Adoption of Explainability in Multimodal Attention-based Models [0.0]
This systematic literature review analyzes research published between January 2020 and early 2024 that focuses on the explainability of multimodal models. We find that evaluation methods for XAI in multimodal settings are largely non-systematic, lacking consistency, robustness, and consideration for modality-specific cognitive and contextual factors.
arXiv Detail & Related papers (2025-08-06T13:14:20Z)
- Deep Research Agents: A Systematic Examination And Roadmap [79.04813794804377]
Deep Research (DR) agents are designed to tackle complex, multi-turn informational research tasks. In this paper, we conduct a detailed analysis of the foundational technologies and architectural components that constitute DR agents.
arXiv Detail & Related papers (2025-06-22T16:52:48Z)
- Erasing Concepts, Steering Generations: A Comprehensive Survey of Concept Suppression [10.950528923845955]
Uncontrolled reproduction of sensitive, copyrighted, or harmful imagery poses serious ethical, legal, and safety challenges. The concept erasure paradigm has emerged as a promising direction, enabling the selective removal of specific semantic concepts from generative models. This survey aims to guide researchers toward safer, more ethically aligned generative models.
arXiv Detail & Related papers (2025-05-26T01:24:34Z)
- Disambiguation in Conversational Question Answering in the Era of LLM: A Survey [36.37587894344511]
Ambiguity remains a fundamental challenge in Natural Language Processing (NLP). With the advent of Large Language Models (LLMs), addressing ambiguity has become even more critical due to their expanded capabilities and applications. This paper explores the definition, forms, and implications of ambiguity for language-driven systems.
arXiv Detail & Related papers (2025-05-18T20:53:41Z)
- Large Language Models Meet Stance Detection: A Survey of Tasks, Methods, Applications, Challenges and Future Directions [0.37865171120254354]
Stance detection is essential for understanding subjective content across various platforms such as social media, news articles, and online reviews. Recent advances in Large Language Models (LLMs) have revolutionized stance detection by introducing novel capabilities. We present a novel taxonomy for LLM-based stance detection approaches, structured along three key dimensions. Key applications in stance detection, political analysis, public health monitoring, and social media moderation are discussed.
arXiv Detail & Related papers (2025-05-13T11:47:49Z)
- A Desideratum for Conversational Agents: Capabilities, Challenges, and Future Directions [51.96890647837277]
Large Language Models (LLMs) have propelled conversational AI from traditional dialogue systems into sophisticated agents capable of autonomous actions, contextual awareness, and multi-turn interactions with users. This survey paper presents a desideratum for next-generation Conversational Agents - what has been achieved, what challenges persist, and what must be done for more scalable systems that approach human-level intelligence.
arXiv Detail & Related papers (2025-04-07T21:01:25Z)
- Why Reasoning Matters? A Survey of Advancements in Multimodal Reasoning (v1) [66.51642638034822]
Reasoning is central to human intelligence, enabling structured problem-solving across diverse tasks. Recent advances in large language models (LLMs) have greatly enhanced their reasoning abilities in arithmetic, commonsense, and symbolic domains. This paper offers a concise yet insightful overview of reasoning techniques in both textual and multimodal LLMs.
arXiv Detail & Related papers (2025-04-04T04:04:56Z) - Unified Generative and Discriminative Training for Multi-modal Large Language Models [88.84491005030316]
Generative training has enabled Vision-Language Models (VLMs) to tackle various complex tasks.
Discriminative training, exemplified by models like CLIP, excels in zero-shot image-text classification and retrieval.
This paper proposes a unified approach that integrates the strengths of both paradigms.
arXiv Detail & Related papers (2024-11-01T01:51:31Z) - Hierarchical Reinforcement Learning for Temporal Abstraction of Listwise Recommendation [51.06031200728449]
We propose a novel framework called mccHRL to provide different levels of temporal abstraction on listwise recommendation. Within the hierarchical framework, the high-level agent studies the evolution of user perception, while the low-level agent produces the item selection policy. Results show significant performance improvement of our method compared with several well-known baselines.
arXiv Detail & Related papers (2024-09-11T17:01:06Z) - Towards Rationality in Language and Multimodal Agents: A Survey [23.451887560567602]
This work discusses how to build more rational language and multimodal agents. Rationality is the quality of being guided by reason, characterized by decision-making that aligns with evidence and logical principles.
arXiv Detail & Related papers (2024-06-01T01:17:25Z) - Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models [52.24001776263608]
This comprehensive survey delves into the recent strides in hate speech (HS) moderation.
We highlight the burgeoning role of large language models (LLMs) and large multimodal models (LMMs).
We identify existing gaps in research, particularly in the context of underrepresented languages and cultures.
arXiv Detail & Related papers (2024-01-30T03:51:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.