Infinite use of finite means: Zero-Shot Generalization using
Compositional Emergent Protocols
- URL: http://arxiv.org/abs/2012.05011v2
- Date: Sun, 13 Dec 2020 04:19:33 GMT
- Title: Infinite use of finite means: Zero-Shot Generalization using
Compositional Emergent Protocols
- Authors: Rishi Hazra, Sonu Dixit, Sayambhu Sen
- Abstract summary: We show how intrinsic rewards can be leveraged to train agents to induce compositionality in the absence of external feedback.
We introduce Comm-gSCAN, a platform for investigating grounded language acquisition in 2D-grid environments.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human language has been described as a system that makes use of finite means
to express an unlimited array of thoughts. Of particular interest is the aspect
of compositionality, whereby the meaning of a complex, compound language
expression can be deduced from the meaning of its constituent parts. If
artificial agents can develop compositional communication protocols akin to
human language, they can be made to seamlessly generalize to unseen
combinations. However, the real question is: how do we induce compositionality
in emergent communication? Studies have recognized the role of curiosity in
enabling linguistic development in children. It is this same intrinsic urge
that drives us to master complex tasks with decreasing amounts of explicit
reward. In this paper, we seek to use this intrinsic feedback to induce a
systematic and unambiguous protolanguage in artificial agents. We show in our
experiments how these rewards can be leveraged to train agents to induce
compositionality in the absence of any external feedback. Additionally, we
introduce Comm-gSCAN, a platform for investigating grounded language
acquisition in 2D-grid environments. Using this, we demonstrate how
compositionality can enable agents not only to interact with unseen objects,
but also to transfer skills from one task to another in a zero-shot manner (can
an agent, trained to pull and to push twice, pull twice?).
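The abstract stays at a high level, but the core idea of substituting a novelty-style intrinsic reward for external feedback can be sketched in a few lines. The count-based 1/sqrt(n) bonus, the two-symbol toy task, and the stand-in speaker below are illustrative assumptions for this sketch, not the paper's actual formulation:

```python
import random
from collections import defaultdict

# Minimal sketch of a count-based "curiosity" bonus: an intrinsic reward that
# decays as a (message, outcome) pair becomes familiar. The bonus form, the
# toy pull/push task, and the stand-in speaker are assumptions, not the
# paper's implementation.

class CountBasedBonus:
    def __init__(self, scale=1.0):
        self.counts = defaultdict(int)
        self.scale = scale

    def __call__(self, message, outcome):
        key = (message, outcome)
        self.counts[key] += 1
        # Rare message/outcome pairs earn larger rewards.
        return self.scale / self.counts[key] ** 0.5


if __name__ == "__main__":
    bonus = CountBasedBonus()
    verbs, nouns = ["pull", "push"], ["circle", "square"]
    for step in range(8):
        # Stand-in "speaker": composes a two-symbol message for a sampled task.
        task = (random.choice(verbs), random.choice(nouns))
        message = " ".join(task)           # compositional by construction
        outcome = "success"
        r_int = bonus(message, outcome)    # intrinsic reward, no external feedback
        print(f"step {step}: message={message!r}  intrinsic reward = {r_int:.2f}")
```

Because repeated (message, outcome) pairs earn progressively smaller bonuses, the only way to keep collecting reward is to cover new combinations with the same small symbol inventory, a pressure toward systematic reuse in the spirit of the curiosity-driven training the abstract describes.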
Related papers
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization.
Here, we propose a formal definition of compositionality that accounts for and extends our intuitions about compositionality.
The definition is conceptually simple, quantitative, grounded in algorithmic information theory, and applicable to any representation.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- tagE: Enabling an Embodied Agent to Understand Human Instructions [3.943519623674811]
We introduce a novel system known as task and argument grounding for Embodied agents (tagE).
At its core, our system employs an inventive neural network model designed to extract a series of tasks from complex task instructions expressed in natural language.
Our proposed model adopts an encoder-decoder framework enriched with nested decoding to effectively extract tasks and their corresponding arguments from these intricate instructions.
arXiv Detail & Related papers (2023-10-24T08:17:48Z)
- Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models [91.3755431537592]
Representing compositional and non-compositional phrases is critical for language understanding.
We first formulate a problem of predicting the LM-internal representations of longer phrases given those of their constituents.
While we would expect the predictive accuracy to correlate with human judgments of semantic compositionality, we find this is largely not the case.
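A minimal version of such a probe can be sketched as follows, assuming a HuggingFace bert-base-uncased encoder, mean pooling, and a naive additive composition function (all illustrative choices for this sketch, not the paper's exact protocol):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative probe: does a simple additive composition of constituent
# representations approximate the LM's representation of the full phrase?
# Model choice, mean pooling, and the composition function are assumptions.

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Mean-pooled final-layer hidden state for a piece of text."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

for phrase in ["red herring", "red apple"]:   # non-compositional vs. compositional
    w1, w2 = phrase.split()
    composed = embed(w1) + embed(w2)          # naive additive composition
    actual = embed(phrase)                    # LM's own phrase representation
    sim = torch.cosine_similarity(composed, actual, dim=0).item()
    print(f"{phrase!r}: cosine(composed, actual) = {sim:.3f}")
```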
arXiv Detail & Related papers (2022-10-07T14:21:30Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrate that machines can generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Emergent Communication of Generalizations [13.14792537601313]
We argue that communicating about a single object in a shared visual context is prone to overfitting and does not encourage language useful beyond concrete reference.
We propose games that require communicating generalizations over sets of objects representing abstract visual concepts.
We find that these games greatly improve systematicity and interpretability of the learned languages.
arXiv Detail & Related papers (2021-06-04T19:02:18Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- Zero-Shot Generalization using Intrinsically Motivated Compositional Emergent Protocols [0.0]
We show how compositionality can enable agents to not only interact with unseen objects but also transfer skills from one task to another in a zero-shot setting.
arXiv Detail & Related papers (2021-05-11T14:20:26Z)
- Emergence of Pragmatics from Referential Game between Theory of Mind Agents [64.25696237463397]
We propose an algorithm with which agents can spontaneously learn to "read between the lines" without any explicit hand-designed rules.
We integrate the theory of mind (ToM) in a cooperative multi-agent pedagogical situation and propose an adaptive reinforcement learning (RL) algorithm to develop a communication protocol.
arXiv Detail & Related papers (2020-01-21T19:37:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.