(Re)construing Meaning in NLP
- URL: http://arxiv.org/abs/2005.09099v1
- Date: Mon, 18 May 2020 21:21:34 GMT
- Title: (Re)construing Meaning in NLP
- Authors: Sean Trott, Tiago Timponi Torrent, Nancy Chang, Nathan Schneider
- Abstract summary: We argue that the way something is expressed reflects different ways of conceptualizing or construing the information being conveyed, and show how insights from construal could inform theoretical and practical work in NLP.
- Score: 15.37817898307963
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human speakers have an extensive toolkit of ways to express themselves. In
this paper, we engage with an idea largely absent from discussions of meaning
in natural language understanding--namely, that the way something is expressed
reflects different ways of conceptualizing or construing the information being
conveyed. We first define this phenomenon more precisely, drawing on
considerable prior work in theoretical cognitive semantics and
psycholinguistics. We then survey some dimensions of construed meaning and show
how insights from construal could inform theoretical and practical work in NLP.
Related papers
- Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models [51.43538150982291]
We study how to learn human-interpretable concepts from data.
Weaving together ideas from both fields, we show that concepts can be provably recovered from diverse data.
arXiv Detail & Related papers (2024-02-14T15:23:59Z)
- ConcEPT: Concept-Enhanced Pre-Training for Language Models [57.778895980999124]
ConcEPT aims to infuse conceptual knowledge into pre-trained language models.
It exploits entity concept prediction, a pre-training objective that uses external concept knowledge to predict the concepts of entities mentioned in the pre-training contexts.
Experiments show that ConcEPT gains improved conceptual knowledge through concept-enhanced pre-training.
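As a rough illustration of what such a concept-prediction objective might look like in code (all names, shapes, and the loss combination here are assumptions for illustration, not ConcEPT's actual implementation), consider this minimal PyTorch sketch:

```python
# Illustrative sketch only: an auxiliary head that predicts the concept of each
# entity mention from its encoder representation; its loss would be added to the
# usual pre-training (e.g., masked language modelling) loss.
import torch
import torch.nn as nn

class ConceptPredictionHead(nn.Module):
    def __init__(self, hidden_size: int, num_concepts: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_concepts)
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, entity_states: torch.Tensor, concept_labels: torch.Tensor) -> torch.Tensor:
        # entity_states: (num_mentions, hidden_size) pooled mention representations
        # concept_labels: (num_mentions,) indices into an external concept inventory
        logits = self.classifier(entity_states)
        return self.loss_fn(logits, concept_labels)

# Hypothetical usage with stand-in tensors:
head = ConceptPredictionHead(hidden_size=768, num_concepts=1000)
concept_loss = head(torch.randn(4, 768), torch.tensor([3, 17, 42, 3]))
# total_loss = mlm_loss + concept_loss  # mlm_loss would come from the base PLM
```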
arXiv Detail & Related papers (2024-01-11T05:05:01Z)
- Interpreting Pretrained Language Models via Concept Bottlenecks [55.47515772358389]
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
The lack of interpretability due to their "black-box" nature poses challenges for responsible implementation.
We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans.
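As an illustration of the general concept-bottleneck idea (the concept set, sizes, and layer names below are assumptions, not the paper's actual design), a minimal sketch might route a PLM's pooled output through a small layer of human-interpretable concept scores before predicting the task label:

```python
# Minimal concept-bottleneck sketch: the task label is predicted only through a
# small vector of human-interpretable concept scores, which can be inspected
# (or intervened on) directly. Illustrative, not the paper's implementation.
import torch
import torch.nn as nn

CONCEPTS = ["sentiment", "formality", "negation"]  # hypothetical concept set

class ConceptBottleneckHead(nn.Module):
    def __init__(self, hidden_size: int, num_labels: int):
        super().__init__()
        self.to_concepts = nn.Linear(hidden_size, len(CONCEPTS))  # x -> concepts
        self.to_label = nn.Linear(len(CONCEPTS), num_labels)      # concepts -> y

    def forward(self, pooled: torch.Tensor):
        concept_scores = torch.sigmoid(self.to_concepts(pooled))
        logits = self.to_label(concept_scores)
        return concept_scores, logits

head = ConceptBottleneckHead(hidden_size=768, num_labels=2)
concepts, logits = head(torch.randn(1, 768))  # stand-in for a PLM's pooled output
```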
arXiv Detail & Related papers (2023-11-08T20:41:18Z)
- On the Computation of Meaning, Language Models and Incomprehensible Horrors [0.0]
We integrate foundational theories of meaning with a mathematical formalism of artificial general intelligence (AGI).
Our findings shed light on the relationship between meaning and intelligence, and how we can build machines that comprehend and intend meaning.
arXiv Detail & Related papers (2023-04-25T09:41:00Z)
- Conceptual structure coheres in human cognition but not in large language models [7.405352374343134]
We show that conceptual structure is robust to differences in culture, language, and method of estimation.
Results highlight an important difference between contemporary large language models and human cognition.
arXiv Detail & Related papers (2023-04-05T21:27:01Z)
- ConceptX: A Framework for Latent Concept Analysis [21.760620298330235]
We present ConceptX, a human-in-the-loop framework for interpreting and annotating the latent representational space of pre-trained Language Models (pLMs).
We use an unsupervised method to discover the concepts learned in these models and provide a graphical interface for humans to generate explanations for them.
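One generic way to realize such unsupervised concept discovery (offered purely as an illustration with stand-in data, not as ConceptX's exact pipeline) is to cluster contextual token representations and treat each cluster as a candidate latent concept for humans to label:

```python
# Illustrative latent-concept discovery: cluster token representations with
# k-means and group the corresponding tokens; each cluster becomes a candidate
# "concept" that a human annotator can inspect and explain.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
token_reprs = rng.normal(size=(500, 768))   # stand-in for contextual embeddings
tokens = [f"tok_{i}" for i in range(500)]   # the tokens those vectors came from

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(token_reprs)
clusters = {k: [] for k in range(10)}
for tok, label in zip(tokens, kmeans.labels_):
    clusters[int(label)].append(tok)

print(clusters[0][:10])  # tokens grouped under the first candidate concept
```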
arXiv Detail & Related papers (2022-11-12T11:31:09Z)
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAVs).
We then discuss approaches for automatically extracting concepts and for addressing some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real world applications.
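The usual recipe for obtaining a CAV is to fit a linear classifier that separates a layer's activations on concept examples from its activations on random counterexamples; the classifier's weight vector gives the concept direction. A minimal sketch with stand-in activations (the data here is random and purely illustrative):

```python
# Sketch of computing a Concept Activation Vector (CAV) from a layer's
# activations: fit a linear probe separating concept examples from random ones;
# the normalized weight vector points in the "concept direction".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
concept_acts = rng.normal(loc=0.5, size=(100, 768))  # activations on concept inputs (stand-in)
random_acts = rng.normal(loc=0.0, size=(100, 768))   # activations on random inputs (stand-in)

X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
clf = LogisticRegression(max_iter=1000).fit(X, y)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Concept sensitivity of a prediction is then the directional derivative of the
# class logit along the CAV, i.e. the dot product of its gradient with `cav`.
grad_of_logit = rng.normal(size=768)                 # placeholder for a real gradient
sensitivity = float(grad_of_logit @ cav)
print(sensitivity)
```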
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
- Representing Inferences and their Lexicalization [7.081604594416339]
The meaning of a word is taken to be the entities, predications, presuppositions, and potential inferences that it adds to an ongoing situation.
As words compose, the minimal model in the situation evolves to limit and direct inference.
arXiv Detail & Related papers (2021-12-14T19:23:43Z)
- On Semantic Cognition, Inductive Generalization, and Language Models [0.2538209532048867]
My research focuses on understanding semantic knowledge in neural network models trained solely to predict natural language (referred to as language models, or LMs).
I propose a framework inspired by 'inductive reasoning,' a phenomenon that sheds light on how humans utilize background knowledge to make inductive leaps and generalize from new pieces of information about concepts and their properties.
arXiv Detail & Related papers (2021-11-04T03:19:52Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- Semantics-Aware Inferential Network for Natural Language Understanding [79.70497178043368]
We propose a Semantics-Aware Inferential Network (SAIN) for natural language understanding.
Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues.
Our model achieves significant improvement on 11 tasks including machine reading comprehension and natural language inference.
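Purely as a sketch of how explicit semantics might enter as a complementary input (the role embeddings, dimensions, and single attention step below are assumptions, not SAIN's actual architecture), one could fuse semantic-role tags with contextual token states before a reasoning step:

```python
# Assumption-laden sketch: semantic-role label embeddings are concatenated with
# contextual token vectors, then an attention step reasons over the fused
# sequence. Illustrative only, not SAIN's published architecture.
import torch
import torch.nn as nn

class SemanticsAwareLayer(nn.Module):
    def __init__(self, hidden_size: int, num_roles: int, role_dim: int = 64):
        super().__init__()
        self.role_emb = nn.Embedding(num_roles, role_dim)
        self.project = nn.Linear(hidden_size + role_dim, hidden_size)
        self.reason = nn.MultiheadAttention(hidden_size, num_heads=8, batch_first=True)

    def forward(self, token_states: torch.Tensor, role_ids: torch.Tensor) -> torch.Tensor:
        # token_states: (batch, seq, hidden) contextual embeddings
        # role_ids: (batch, seq) semantic-role tags for each token
        fused = self.project(torch.cat([token_states, self.role_emb(role_ids)], dim=-1))
        out, _ = self.reason(fused, fused, fused)  # one reasoning step over semantic clues
        return out

layer = SemanticsAwareLayer(hidden_size=768, num_roles=30)
out = layer(torch.randn(2, 16, 768), torch.randint(0, 30, (2, 16)))
```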
arXiv Detail & Related papers (2020-04-28T07:24:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.