Machines of Meaning
- URL: http://arxiv.org/abs/2412.07975v1
- Date: Tue, 10 Dec 2024 23:23:28 GMT
- Title: Machines of Meaning
- Authors: Davide Nunes, Luis Antunes
- Abstract summary: We discuss the challenges in the specification of "machines of meaning".
We highlight the need for detachment from anthropocentrism in the study of machines of meaning.
We propose a view of "meaning" to facilitate the discourse around approaches such as neural language models.
- Abstract: One goal of Artificial Intelligence is to learn meaningful representations for natural language expressions, but what this entails is not always clear. A variety of new linguistic behaviours present themselves, embodied as computers, enhanced humans, and collectives with various kinds of integration and communication. But to measure and understand the behaviours generated by such systems, we must clarify the language we use to talk about them. Computational models are often confused with the phenomena they try to model, and shallow metaphors are used as justifications for (or to hype) the success of computational techniques on many tasks related to natural language, thus implying their progress toward human-level machine intelligence without ever clarifying what that means. This paper discusses the challenges in the specification of "machines of meaning": machines capable of acquiring meaningful semantics from natural language in order to achieve their goals. We characterize "meaning" in a computational setting, while highlighting the need for detachment from anthropocentrism in the study of the behaviour of machines of meaning. The pressing need to analyse AI risks and ethics requires a proper measurement of AI capabilities, which cannot be productively studied and explained while using ambiguous language. We propose a view of "meaning" to facilitate the discourse around approaches such as neural language models and to help broaden the research perspectives for technology that facilitates dialogues between humans and machines.
Related papers
- Large Language Models for Scientific Synthesis, Inference and Explanation [56.41963802804953]
We show how large language models can perform scientific synthesis, inference, and explanation.
We show that the large language model can augment its encoded scientific "knowledge" by synthesizing from the scientific literature.
This approach has the further advantage that the large language model can explain the machine learning system's predictions.
arXiv Detail & Related papers (2023-10-12T02:17:59Z) - From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought [124.40905824051079]
We propose rational meaning construction, a computational framework for language-informed thinking.
We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought.
We show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings.
We extend our framework to integrate cognitively-motivated symbolic modules.
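As a rough illustration of what a mapping from natural language into a probabilistic language of thought can look like, here is a minimal Python sketch: an utterance is (by assumption) translated into a tiny generative world model plus a condition and a query, and inference proceeds by rejection sampling. The sentences, the toy world model, and the probabilities are illustrative placeholders, not the cited paper's actual framework.

```python
import random

# Illustrative "translation" of two utterances into a toy probabilistic
# language of thought: a prior over world states plus a condition and a query.
def prior():
    # Toy generative world model: rain makes the grass wet; something else occasionally does too.
    raining = random.random() < 0.3
    wet_grass = raining or (random.random() < 0.1)
    return {"raining": raining, "wet_grass": wet_grass}

# "The grass is wet" translated (by assumption) into a condition on world states.
def condition(world):
    return world["wet_grass"]

# "Is it raining?" translated (by assumption) into a query over world states.
def query(world):
    return world["raining"]

def infer(prior, condition, query, samples=100_000):
    """Rejection sampling: evaluate the query only on worlds consistent with the condition."""
    worlds = [prior() for _ in range(samples)]
    accepted = [w for w in worlds if condition(w)]
    return sum(query(w) for w in accepted) / len(accepted)

if __name__ == "__main__":
    # P(raining | "the grass is wet") under the toy model, roughly 0.81.
    print(f"P(raining | 'the grass is wet') ~= {infer(prior, condition, query):.2f}")
```

The point of the sketch is only that meaning is treated as a translation target: the same condition can be combined with different queries, which is one way to read "context-sensitive" in the summary above.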
arXiv Detail & Related papers (2023-06-22T05:14:00Z) - On the Computation of Meaning, Language Models and Incomprehensible Horrors [0.0]
We integrate foundational theories of meaning with a mathematical formalism of artificial general intelligence (AGI).
Our findings shed light on the relationship between meaning and intelligence, and how we can build machines that comprehend and intend meaning.
arXiv Detail & Related papers (2023-04-25T09:41:00Z) - Is it possible not to cheat on the Turing Test: Exploring the potential and challenges for true natural language 'understanding' by computers [0.0]
Research in natural language understanding in artificial intelligence claims to have been making great strides.
A comprehensive, interdisciplinary overview of current approaches and remaining challenges is yet to be carried out.
I unite all of these perspectives to unpack the challenges involved in reaching true (human-like) language understanding.
arXiv Detail & Related papers (2022-06-29T14:19:48Z) - Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine principles of symbolism and connectionism by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrate that machines can generate a spontaneous, flexible, and semantic language.
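As a rough sketch of deriving a discrete "message" from a neural network (not the cited paper's setup), the snippet below uses a Gumbel-softmax relaxation so that an encoder emits approximately one-hot symbols while remaining trainable end to end; the vocabulary size, message length, and observation dimension are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE = 8      # assumed size of the emergent symbol vocabulary
MESSAGE_LEN = 3     # assumed number of symbols per message
OBS_DIM = 16        # assumed dimensionality of the speaker's observation

class Speaker(nn.Module):
    """Maps a continuous observation to a sequence of discrete symbols."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(OBS_DIM, MESSAGE_LEN * VOCAB_SIZE)

    def forward(self, obs, tau=1.0):
        logits = self.encoder(obs).view(-1, MESSAGE_LEN, VOCAB_SIZE)
        # hard=True yields one-hot symbols in the forward pass while keeping
        # gradients for end-to-end training of speaker and listener.
        return F.gumbel_softmax(logits, tau=tau, hard=True)

if __name__ == "__main__":
    speaker = Speaker()
    message = speaker(torch.randn(2, OBS_DIM))   # batch of 2 observations
    print(message.argmax(dim=-1))                # discrete symbol indices
```

In an emergent-communication setting, a listener network would consume these symbols and both agents would be trained on a shared task reward; the discretization step is what turns continuous activations into something language-like.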
arXiv Detail & Related papers (2022-01-14T14:54:58Z) - My Teacher Thinks The World Is Flat! Interpreting Automatic Essay Scoring Mechanism [71.34160809068996]
Recent work shows that automated scoring systems are prone even to common-sense adversarial samples.
We utilize recent advances in interpretability to find the extent to which features such as coherence, content and relevance are important for automated scoring mechanisms.
We also find that since the models are not semantically grounded with world-knowledge and common sense, adding false facts such as "the world is flat" actually increases the score instead of decreasing it.
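The kind of probe described above can be sketched as follows: append a false but fluent sentence to an essay and compare the scores before and after. The `naive_scorer` below is a toy stand-in for whatever scoring model is under test, chosen only so the sketch runs; it is not the system studied in the cited paper.

```python
def naive_scorer(text: str) -> float:
    """Toy stand-in for an automated essay scorer: rewards length and vocabulary.

    Real systems are far more complex, but any scorer that is not semantically
    grounded can behave like this under the probe below.
    """
    words = text.split()
    return 0.1 * len(words) + 0.5 * len(set(w.lower() for w in words))

def false_fact_probe(score, essay: str, false_fact: str = "The world is flat.") -> float:
    """Return the score change caused by appending a factually false sentence.

    A positive delta reproduces the failure mode described above: the scorer
    rewards added text regardless of its truth.
    """
    return score(essay + " " + false_fact) - score(essay)

if __name__ == "__main__":
    essay = "Essays should be judged on coherence, content and relevance."
    print(f"score delta after false fact: {false_fact_probe(naive_scorer, essay):+.2f}")
```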
arXiv Detail & Related papers (2020-12-27T06:19:20Z) - Machine Semiotics [0.0]
For speech assistive devices, the learning of machine-specific meanings of human utterances appears to be sufficient.
Using the quite trivial example of a cognitive heating device, we show that this process can be formalized as the reinforcement learning of utterance-meaning pairs (UMPs).
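A minimal sketch of what such a formalization could look like, assuming a tabular, bandit-style learner: the device keeps a value for each utterance-meaning pair and reinforces a pairing whenever acting on that meaning earns positive feedback. The utterances, candidate actions, and reward signal are illustrative and not taken from the cited paper.

```python
import random
from collections import defaultdict

# Candidate machine-specific "meanings" an utterance can be mapped to.
ACTIONS = ["raise_temperature", "lower_temperature", "do_nothing"]

class UMPLearner:
    """Tabular reinforcement learning of utterance-meaning pairs (UMPs)."""

    def __init__(self, epsilon=0.2, lr=0.5):
        self.q = defaultdict(float)   # (utterance, meaning) -> learned value
        self.epsilon = epsilon        # exploration rate
        self.lr = lr                  # learning rate

    def interpret(self, utterance):
        """Pick a meaning for the utterance (epsilon-greedy over learned values)."""
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(utterance, a)])

    def reinforce(self, utterance, meaning, reward):
        """Strengthen or weaken the utterance-meaning pairing based on feedback."""
        key = (utterance, meaning)
        self.q[key] += self.lr * (reward - self.q[key])

if __name__ == "__main__":
    learner = UMPLearner()
    for _ in range(200):
        utterance = "I'm cold"
        meaning = learner.interpret(utterance)
        # User feedback: positive only if the device raised the temperature.
        reward = 1.0 if meaning == "raise_temperature" else -1.0
        learner.reinforce(utterance, meaning, reward)
    print(learner.interpret("I'm cold"))  # most likely "raise_temperature"
```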
arXiv Detail & Related papers (2020-08-24T15:49:54Z) - Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on domains such as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z) - Semantics-Aware Inferential Network for Natural Language Understanding [79.70497178043368]
We propose a Semantics-Aware Inferential Network (SAIN) for natural language understanding.
Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues.
Our model achieves significant improvement on 11 tasks including machine reading comprehension and natural language inference.
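Purely as a generic illustration of "a series of reasoning steps over semantic clues" (and not the published SAIN architecture), the sketch below repeatedly attends from a reasoning state over a set of semantic-clue vectors and refines the state at each step; the dimensions, number of steps, and use of a GRU cell are assumptions.

```python
import torch
import torch.nn as nn

class IterativeSemanticReasoner(nn.Module):
    """Generic multi-step attention over semantic-clue vectors (illustrative only)."""

    def __init__(self, dim=128, heads=4, steps=3):
        super().__init__()
        self.steps = steps
        self.attend = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.update = nn.GRUCell(dim, dim)

    def forward(self, state, clues):
        # state: (batch, dim) current reasoning state
        # clues: (batch, num_clues, dim) contextualized semantic clues (e.g., predicate-argument spans)
        for _ in range(self.steps):
            query = state.unsqueeze(1)                      # (batch, 1, dim)
            gathered, _ = self.attend(query, clues, clues)  # attend over the clues
            state = self.update(gathered.squeeze(1), state) # refine the reasoning state
        return state

if __name__ == "__main__":
    reasoner = IterativeSemanticReasoner()
    state = torch.randn(2, 128)
    clues = torch.randn(2, 6, 128)
    print(reasoner(state, clues).shape)  # torch.Size([2, 128])
```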
arXiv Detail & Related papers (2020-04-28T07:24:43Z) - Machine Learning in Artificial Intelligence: Towards a Common Understanding [0.0]
We aim to clarify the relationship between "machine learning" and "artificial intelligence".
We present a conceptual framework which clarifies the role of machine learning to build (artificial) intelligent agents.
arXiv Detail & Related papers (2020-03-27T19:09:57Z)