Understanding Natural Language Understanding Systems. A Critical
Analysis
- URL: http://arxiv.org/abs/2303.04229v1
- Date: Wed, 1 Mar 2023 08:32:55 GMT
- Title: Understanding Natural Language Understanding Systems. A Critical
Analysis
- Authors: Alessandro Lenci
- Abstract summary: The development of machines that «talk like us», also known as Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial Intelligence (AI).
But never has the trust that we can build «talking machines» been stronger than the one engendered by the last generation of NLU systems.
Are we at the dawn of a new era, in which the Grail is finally closer to us?
- Score: 91.81211519327161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The development of machines that «talk like us», also known as
Natural Language Understanding (NLU) systems, is the Holy Grail of Artificial
Intelligence (AI), since language is the quintessence of human intelligence.
The brief but intense life of NLU research in AI and Natural Language
Processing (NLP) is full of ups and downs, with periods of high hopes that the
Grail is finally within reach, typically followed by phases of equally deep
despair and disillusion. But never has the trust that we can build «talking
machines» been stronger than the one engendered by the last generation of NLU
systems. But is all that glitters in AI really gold? Do state-of-the-art
systems possess something comparable to the human knowledge of language? Are
we at the dawn of a new era, in which the Grail is finally closer to us? In
fact, the latest achievements of AI systems have sparked, or rather renewed,
an intense scientific debate on their true language understanding
capabilities. Some defend the idea that, yes, we are on the right track,
despite the limits that computational models still show. Others are instead
radically skeptical and even dismissive: the present limits are not just
contingent and temporary problems of NLU systems, but the sign of the
intrinsic inadequacy of the epistemological and technological paradigm
grounding them. This paper aims to contribute to this debate by carrying out a
critical analysis of the linguistic abilities of the most recent NLU systems.
I contend that they incorporate important aspects of the way language is
learnt and processed by humans, but at the same time lack key interpretive and
inferential skills that they are unlikely to attain unless they are integrated
with structured knowledge and the ability to exploit it for language use.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Cognition is All You Need -- The Next Layer of AI Above Large Language Models [0.0]
We present Cognitive AI, a framework for neurosymbolic cognition outside of large language models.
We propose that Cognitive AI is a necessary precursor for the evolution of the forms of AI, such as AGI, and specifically claim that AGI cannot be achieved by probabilistic approaches on their own.
We conclude with a discussion of the implications for large language models, adoption cycles in AI, and commercial Cognitive AI development.
arXiv Detail & Related papers (2024-03-04T16:11:57Z)
- AI for Mathematics: A Cognitive Science Perspective [86.02346372284292]
Mathematics is one of the most powerful conceptual systems developed and used by the human species.
Rapid progress in AI, particularly propelled by advances in large language models (LLMs), has sparked renewed, widespread interest in building such systems.
arXiv Detail & Related papers (2023-10-19T02:00:31Z)
- Large Language Models for Scientific Synthesis, Inference and Explanation [56.41963802804953]
We show how large language models can perform scientific synthesis, inference, and explanation.
We show that the large language model can augment this "knowledge" by synthesizing from the scientific literature.
This approach has the further advantage that the large language model can explain the machine learning system's predictions.
arXiv Detail & Related papers (2023-10-12T02:17:59Z)
- A Review on Objective-Driven Artificial Intelligence [0.0]
Humans have an innate ability to understand context, nuances, and subtle cues in communication.
Humans possess a vast repository of common-sense knowledge that helps us make logical inferences and predictions about the world.
Machines lack this innate understanding and often struggle with making sense of situations that humans find trivial.
arXiv Detail & Related papers (2023-08-20T02:07:42Z)
- Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc [0.0]
Generative AI, the most popular current approach to AI, consists of large language models (LLMs) that are trained to produce outputs that are plausible, but not necessarily correct.
We discuss an alternative approach to AI which could theoretically address many of the limitations associated with current approaches.
arXiv Detail & Related papers (2023-07-31T16:29:28Z)
- Towards AGI in Computer Vision: Lessons Learned from GPT and Large Language Models [98.72986679502871]
Chat systems powered by large language models (LLMs) have emerged and rapidly become a promising direction to achieve artificial general intelligence (AGI).
But the path towards AGI in computer vision (CV) remains unclear.
We imagine a pipeline that puts a CV algorithm in world-scale, interactable environments, pre-trains it to predict future frames with respect to its action, and then fine-tunes it with instruction to accomplish various tasks.
arXiv Detail & Related papers (2023-06-14T17:15:01Z)
- Mindstorms in Natural Language-Based Societies of Mind [110.05229611910478]
Minsky's "society of mind" and Schmidhuber's "learning to think" inspire diverse societies of large multimodal neural networks (NNs).
Recent implementations of NN-based societies of minds consist of large language models (LLMs) and other NN-based experts communicating through a natural language interface.
In these natural language-based societies of mind (NLSOMs), new agents -- all communicating through the same universal symbolic language -- are easily added in a modular fashion.
arXiv Detail & Related papers (2023-05-26T16:21:25Z)
- Talking About Large Language Models [7.005266019853958]
The more adept large language models become, the more vulnerable we become to anthropomorphism.
This paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work.
The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence.
arXiv Detail & Related papers (2022-12-07T10:01:44Z)
- Is Neuro-Symbolic AI Meeting its Promise in Natural Language Processing? A Structured Review [2.064612766965483]
Advocates for Neuro-Symbolic AI (NeSy) assert that combining deep learning with symbolic reasoning will lead to stronger AI.
We conduct a structured review of studies implementing NeSy for NLP, along with their challenges and future directions.
We aim to answer the question of whether NeSy is indeed meeting its promises: reasoning, out-of-distribution generalization, interpretability, learning and reasoning from small data, and transferability to new domains.
arXiv Detail & Related papers (2022-02-24T17:13:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented here and is not responsible for any consequences arising from its use.