A Rate-Distortion view of human pragmatic reasoning
- URL: http://arxiv.org/abs/2005.06641v1
- Date: Wed, 13 May 2020 22:04:27 GMT
- Title: A Rate-Distortion view of human pragmatic reasoning
- Authors: Noga Zaslavsky, Jennifer Hu, Roger P. Levy
- Abstract summary: We present a novel analysis of the Rational Speech Act (RSA) framework.
We show that RSA implements an alternating maximization for optimizing a tradeoff between expected utility and communicative effort.
This work furthers the mathematical understanding of RSA models, and suggests that general information-theoretic principles may give rise to human pragmatic reasoning.
- Score: 3.9425618017443322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: What computational principles underlie human pragmatic reasoning? A prominent
approach to pragmatics is the Rational Speech Act (RSA) framework, which
formulates pragmatic reasoning as probabilistic speakers and listeners
recursively reasoning about each other. While RSA enjoys broad empirical
support, it is not yet clear whether the dynamics of such recursive reasoning
may be governed by a general optimization principle. Here, we present a novel
analysis of the RSA framework that addresses this question. First, we show that
RSA recursion implements an alternating maximization for optimizing a tradeoff
between expected utility and communicative effort. On that basis, we study the
dynamics of RSA recursion and disconfirm the conjecture that expected utility
is guaranteed to improve with recursion depth. Second, we show that RSA can be
grounded in Rate-Distortion theory, while maintaining a similar ability to
account for human behavior and avoiding a bias of RSA toward random utterance
production. This work furthers the mathematical understanding of RSA models,
and suggests that general information-theoretic principles may give rise to
human pragmatic reasoning.
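The recursion described in the abstract can be made concrete with a small numerical sketch. The following is not the authors' code; it is a minimal NumPy illustration of the standard RSA updates the abstract refers to (literal listener, softmax speaker with rationality alpha and utterance costs, pragmatic listener), printing at each depth the expected-utility and effort terms of the tradeoff. The toy lexicon, function names, and parameter values are assumptions for illustration only; the exact form of the optimized objective is given in the paper itself.

```python
# Minimal RSA recursion sketch (illustrative, not the authors' implementation).
# Assumes each utterance is true of at least one meaning and depth >= 1.
import numpy as np

def rsa(lexicon, prior, cost, alpha=1.0, depth=5):
    """Run RSA recursion and report utility/effort terms per depth.

    lexicon : (U, M) truth values L(u, m) over utterances x meanings
    prior   : (M,) prior over meanings P(m)
    cost    : (U,) utterance costs C(u)
    alpha   : speaker rationality parameter
    """
    # Literal listener: L0(m|u) proportional to L(u,m) * P(m)
    listener = lexicon * prior
    listener /= listener.sum(axis=1, keepdims=True)

    for t in range(1, depth + 1):
        # Pragmatic speaker: S_t(u|m) proportional to exp(alpha * (log L_{t-1}(m|u) - C(u)))
        log_l = np.log(np.clip(listener.T, 1e-12, None))   # shape (M, U)
        speaker = np.exp(alpha * (log_l - cost))
        speaker /= speaker.sum(axis=1, keepdims=True)

        # Pragmatic listener: L_t(m|u) proportional to S_t(u|m) * P(m)
        listener = (speaker * prior[:, None]).T             # shape (U, M)
        listener /= listener.sum(axis=1, keepdims=True)

        # Components of the utility-effort tradeoff mentioned in the abstract
        joint = prior[:, None] * speaker                    # P(m) * S_t(u|m)
        exp_utility = (joint * np.log(np.clip(listener.T, 1e-12, None))).sum()
        exp_cost = (joint * cost).sum()
        print(f"depth {t}: E[log L] = {exp_utility:.3f}, E[cost] = {exp_cost:.3f}")
    return speaker, listener

# Toy reference game: two meanings, two utterances ("square", "blue square")
lexicon = np.array([[1.0, 1.0],    # "square" is true of both meanings
                    [0.0, 1.0]])   # "blue square" is true of meaning 2 only
prior = np.array([0.5, 0.5])
cost = np.array([0.0, 0.1])
rsa(lexicon, prior, cost, alpha=2.0, depth=3)
```

The printed quantities track the two components named in the abstract (expected utility and communicative effort); how these combine into the single objective whose alternating maximization the paper proves is spelled out in the paper itself.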
Related papers
- The Foundations of Tokenization: Statistical and Computational Concerns [51.370165245628975]
Tokenization is a critical step in the NLP pipeline.
Despite its recognized importance as a standard representation method in NLP, the theoretical underpinnings of tokenization are not yet fully understood.
The present paper contributes to addressing this theoretical gap by proposing a unified formal framework for representing and analyzing tokenizer models.
arXiv Detail & Related papers (2024-07-16T11:12:28Z)
- Understanding Reasoning Ability of Language Models From the Perspective of Reasoning Paths Aggregation [110.71955853831707]
We view LMs as deriving new conclusions by aggregating indirect reasoning paths seen at pre-training time.
We formalize the reasoning paths as random walk paths on the knowledge/reasoning graphs.
Experiments and analysis on multiple KG and CoT datasets reveal the effect of training on random walk paths.
arXiv Detail & Related papers (2024-02-05T18:25:51Z)
- A Simple Generative Model of Logical Reasoning and Statistical Learning [0.6853165736531939]
Statistical learning and logical reasoning are two major fields of AI expected to be unified for human-like machine intelligence.
We here propose a simple Bayesian model of logical reasoning and statistical learning.
We simply model how data causes symbolic knowledge in terms of its satisfiability in formal logic.
arXiv Detail & Related papers (2023-05-18T16:34:51Z)
- Pragmatic Reasoning in Structured Signaling Games [2.28438857884398]
We introduce a structured signaling game, an extension of the classical signaling game with a similarity structure between meanings in the context.
We show that pragmatic agents using sRSA on top of semantic representations attain efficiency very close to the information theoretic limit.
We also explore the interaction between pragmatic reasoning and learning in a multi-agent reinforcement learning framework.
arXiv Detail & Related papers (2023-05-17T12:43:29Z)
- Visual Abductive Reasoning [85.17040703205608]
Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining the abductive reasoning ability of machine intelligence in everyday visual situations.
arXiv Detail & Related papers (2022-03-26T10:17:03Z)
- Exhaustivity and anti-exhaustivity in the RSA framework: Testing the effect of prior beliefs [68.8204255655161]
We focus on cases when sensitivity to priors leads to counterintuitive predictions of the Rational Speech Act (RSA) framework.
We show that in the baseline RSA model, under certain conditions, anti-exhaustive readings are predicted.
We find no anti-exhaustivity effects, but observe that message choice is sensitive to priors, as predicted by the RSA framework overall.
arXiv Detail & Related papers (2022-02-14T20:35:03Z)
- Causal Inference Principles for Reasoning about Commonsense Causality [93.19149325083968]
Commonsense causality reasoning aims at identifying plausible causes and effects in natural language descriptions that are deemed reasonable by an average person.
Existing work usually relies entirely on deep language models and is potentially susceptible to confounding co-occurrences.
Motivated by classical causal principles, we articulate the central question of CCR and draw parallels between human subjects in observational studies and natural languages.
We propose a novel framework, ROCK, to Reason About Commonsense Causality, which utilizes temporal signals as incidental supervision.
arXiv Detail & Related papers (2022-01-31T06:12:39Z)
- Scalable pragmatic communication via self-supervision [14.01704261285015]
We propose an architecture and learning process in which agents acquire pragmatic policies via self-supervision instead of imitating human data.
This work suggests a new principled approach for equipping artificial agents with pragmatic skills via self-supervision.
arXiv Detail & Related papers (2021-08-12T15:28:30Z)
- Relational reasoning and generalization using non-symbolic neural networks [66.07793171648161]
Previous work suggested that neural networks were not suitable models of human relational reasoning because they could not represent mathematical identity, the most basic form of equality.
We find that neural networks are able to learn (1) basic equality (mathematical identity), (2) sequential equality problems (learning ABA-patterned sequences) with only positive training instances, and (3) a complex, hierarchical equality problem with only basic equality training instances.
These results suggest that essential aspects of symbolic reasoning can emerge from data-driven, non-symbolic learning processes.
arXiv Detail & Related papers (2020-06-14T18:25:42Z)
- Learning to refer informatively by amortizing pragmatic reasoning [35.71540493379324]
We explore the idea that speakers might learn to amortize the cost of Rational Speech Acts over time.
We find that our amortized model is able to quickly generate language that is effective and concise across a range of contexts.
arXiv Detail & Related papers (2020-05-31T02:52:22Z)