Shared Visual Representations of Drawing for Communication: How do
different biases affect human interpretability and intent?
- URL: http://arxiv.org/abs/2110.08203v1
- Date: Fri, 15 Oct 2021 17:02:34 GMT
- Title: Shared Visual Representations of Drawing for Communication: How do
different biases affect human interpretability and intent?
- Authors: Daniela Mihai, Jonathon Hare
- Abstract summary: We show that a combination of powerful pretrained encoder networks, with appropriate inductive biases, can lead to agents that draw recognisable sketches.
We develop an approach to help automatically analyse the semantic content being conveyed by a sketch.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an investigation into how representational losses can affect the
drawings produced by artificial agents playing a communication game. Building
upon recent advances, we show that a combination of powerful pretrained encoder
networks, with appropriate inductive biases, can lead to agents that draw
recognisable sketches, whilst still communicating well. Further, we start to
develop an approach to help automatically analyse the semantic content being
conveyed by a sketch and demonstrate that current approaches to inducing
perceptual biases lead to a notion of objectness being a key feature despite
the agent training being self-supervised.
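To make this objective concrete, here is a minimal PyTorch sketch of a loss of this shape: a referential game term plus a representational term computed against a frozen pretrained encoder. The function names, the toy encoder, and the 0.1 weighting are illustrative assumptions, not the authors' implementation.
```python
import torch
import torch.nn.functional as F

# Toy stand-in for a powerful pretrained encoder (an ImageNet CNN in practice).
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
for p in encoder.parameters():
    p.requires_grad_(False)  # frozen: only the drawing/receiving agents learn

def perceptual_loss(sketch, target_image):
    """Representational loss: the frozen encoder's features of the drawing
    should stay close to its features of the image being conveyed."""
    return F.mse_loss(encoder(sketch), encoder(target_image))

def game_loss(receiver_scores, target_index):
    """Communication loss: the receiver must pick the target among candidates."""
    return F.cross_entropy(receiver_scores, target_index)

sketch = torch.rand(4, 3, 32, 32, requires_grad=True)  # rendered drawings
targets = torch.rand(4, 3, 32, 32)                     # images to convey
scores = torch.randn(4, 8, requires_grad=True)         # receiver logits over 8 candidates
labels = torch.randint(0, 8, (4,))

loss = game_loss(scores, labels) + 0.1 * perceptual_loss(sketch, targets)
loss.backward()  # gradients reach the (differentiably rendered) sketch
```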
Related papers
- Self-Explainable Affordance Learning with Embodied Caption [63.88435741872204]
We introduce Self-Explainable Affordance learning (SEA) with embodied caption.
SEA enables robots to articulate their intentions and bridge the gap between explainable vision-language captioning and visual affordance learning.
We propose a novel model to effectively combine affordance grounding with self-explanation in a simple but efficient manner.
arXiv Detail & Related papers (2024-04-08T15:22:38Z)
- Sim-to-Real Causal Transfer: A Metric Learning Approach to Causally-Aware Interaction Representations [62.48505112245388]
We take an in-depth look at the causal awareness of modern representations of agent interactions.
We show that recent representations are already partially resilient to perturbations of non-causal agents.
We propose a metric learning approach that regularizes latent representations with causal annotations.
arXiv Detail & Related papers (2023-12-07T18:57:03Z)
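A hedged sketch of the regularizer described in the entry above, assuming a margin-based formulation: the latent embedding should stay put when a non-causal agent is removed and move when a causal one is. All names and shapes are illustrative.
```python
import torch
import torch.nn.functional as F

def causal_metric_loss(z_full, z_ablated, causal_effect, margin=1.0):
    """z_full: embeddings of full scenes, (B, D); z_ablated: embeddings with
    one agent removed, (B, D); causal_effect: annotation per scene, 0 if the
    removed agent is non-causal. Non-causal removals should leave the latent
    unchanged; causal ones should move it by at least `margin`."""
    d = torch.norm(z_full - z_ablated, dim=-1)
    non_causal = (causal_effect == 0).float()
    return (non_causal * d ** 2 + (1 - non_causal) * F.relu(margin - d) ** 2).mean()

z_full = torch.randn(16, 64, requires_grad=True)
z_ablated = torch.randn(16, 64, requires_grad=True)
effects = torch.randint(0, 2, (16,))   # toy causal annotations
causal_metric_loss(z_full, z_ablated, effects).backward()
```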
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
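To illustrate the black-box setting the survey covers, a minimal transfer-attack sketch: a perturbation crafted with FGSM on a white-box surrogate is replayed against a separate target model whose gradients are never queried. The toy linear models are stand-ins for real networks.
```python
import torch
import torch.nn.functional as F

# Two independent toy models: gradients of `surrogate` are available to the
# attacker, `target` is the black box.
surrogate = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
target = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))

x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])

# FGSM on the surrogate: one signed-gradient step of size epsilon.
x_adv = x.clone().requires_grad_(True)
F.cross_entropy(surrogate(x_adv), y).backward()
x_adv = (x + 0.1 * x_adv.grad.sign()).clamp(0, 1)

# The perturbation is replayed against the target without querying its
# gradients; if the prediction flips, the example has transferred.
print(target(x).argmax(1).item(), target(x_adv).argmax(1).item())
```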
- Learning Intuitive Policies Using Action Features [7.260481131198059]
We investigate the effect of network architecture on the propensity of learning algorithms to exploit semantic relationships.
We find that attention-based architectures that jointly process a featurized representation of observations and actions have a better inductive bias for learning intuitive policies.
arXiv Detail & Related papers (2022-01-29T20:54:52Z)
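A minimal sketch of the inductive bias described in the entry above, assuming a dot-product attention scorer: the observation embedding acts as the query and featurized actions act as keys, so semantically related actions receive related logits.
```python
import torch

obs_dim, act_feat_dim, d = 12, 6, 32
q_proj = torch.nn.Linear(obs_dim, d)       # observation -> query
k_proj = torch.nn.Linear(act_feat_dim, d)  # action features -> keys

obs = torch.randn(4, obs_dim)                 # a batch of observations
action_feats = torch.randn(9, act_feat_dim)   # one feature vector per action

q = q_proj(obs)                               # (4, d)
k = k_proj(action_feats)                      # (9, d)
logits = q @ k.T / d ** 0.5                   # (4, 9): one score per action
policy = torch.softmax(logits, dim=-1)        # distribution over the 9 actions
```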
- Emergent Graphical Conventions in a Visual Communication Game [80.79297387339614]
Humans communicate with graphical sketches alongside symbolic languages.
We take the very first step to model and simulate such an evolution process via two neural agents playing a visual communication game.
We devise a novel reinforcement learning method such that agents are evolved jointly towards successful communication and abstract graphical conventions.
arXiv Detail & Related papers (2021-11-28T18:59:57Z)
- Attack to Fool and Explain Deep Networks [59.97135687719244]
We counter-argue by providing evidence of human-meaningful patterns in adversarial perturbations.
Our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool for interpreting visual models.
arXiv Detail & Related papers (2021-06-20T03:07:36Z)
- Interpretable agent communication from scratch (with a generic visual processor emerging on the side) [29.722833768572805]
We train two deep nets from scratch to perform realistic referent identification through unsupervised emergent communication.
We show that the largely interpretable emergent protocol allows the nets to successfully communicate even about object types they did not see at training time.
Our results provide concrete evidence of the viability of (interpretable) emergent deep net communication in a more realistic scenario than previously considered.
arXiv Detail & Related papers (2021-06-08T11:32:11Z)
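For contrast with drawing-based channels, a minimal sketch of the kind of discrete channel used for emergent-communication referent identification, assuming a Gumbel-softmax relaxation to keep the game end-to-end trainable; the module shapes and single-token messages are simplifications.
```python
import torch
import torch.nn.functional as F

vocab, d = 16, 64
sender = torch.nn.Linear(d, vocab)   # target embedding -> token logits
embed = torch.nn.Linear(vocab, d)    # token -> receiver-side embedding

target = torch.randn(8, d)           # sender's view of the referent
candidates = torch.randn(8, 5, d)    # receiver's candidates; target at index 0

token = F.gumbel_softmax(sender(target), tau=1.0, hard=True)   # one-hot token
scores = torch.einsum('bd,bkd->bk', embed(token), candidates)  # match message
loss = F.cross_entropy(scores, torch.zeros(8, dtype=torch.long))
loss.backward()  # the relaxation lets gradients pass through the discrete channel
```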
- Learning to Draw: Emergent Communication through Sketching [0.0]
We show how agents can learn to communicate in order to collaboratively solve tasks.
Existing research has focused on language, with a learned communication channel transmitting sequences of discrete tokens between the agents.
Our agents are parameterised by deep neural networks, and the drawing procedure is differentiable, allowing for end-to-end training.
In the framework of a referential communication game, we demonstrate that agents can not only successfully learn to communicate by drawing, but with appropriate inductive biases, can do so in a fashion that humans can interpret.
arXiv Detail & Related papers (2021-06-03T18:17:55Z)
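The mechanical ingredient in the entry above is a differentiable drawing procedure. A hedged sketch of one common way to obtain this, a soft rasteriser whose Gaussian falloff and canvas size are illustrative choices rather than the authors' renderer:
```python
import torch

def draw_segment(p0, p1, size=32, sigma=1.0):
    """Softly rasterise a stroke from p0 to p1 on a size x size canvas."""
    ys, xs = torch.meshgrid(torch.arange(size, dtype=torch.float32),
                            torch.arange(size, dtype=torch.float32), indexing='ij')
    pix = torch.stack([xs, ys], dim=-1)              # (size, size, 2)
    seg = p1 - p0
    # Project every pixel onto the segment, clamped to its endpoints.
    t = ((pix - p0) @ seg / (seg @ seg + 1e-8)).clamp(0, 1)
    closest = p0 + t.unsqueeze(-1) * seg
    dist2 = ((pix - closest) ** 2).sum(-1)
    return torch.exp(-dist2 / (2 * sigma ** 2))      # smooth intensity falloff

p0 = torch.tensor([4.0, 4.0], requires_grad=True)
p1 = torch.tensor([28.0, 20.0], requires_grad=True)
canvas = draw_segment(p0, p1)
canvas.sum().backward()   # gradients reach the stroke endpoints
```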
- The emergence of visual semantics through communication games [0.0]
Communication systems which capture visual semantics can be learned in a completely self-supervised manner by playing the right types of game.
Our work bridges a gap between emergent communication research and self-supervised feature learning.
arXiv Detail & Related papers (2021-01-25T17:43:37Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene on, and show that it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
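A hedged sketch of the pseudo-intervention idea, assuming input-gradient saliency and a simple top-quantile masking rule in place of the paper's learned salience module: masking the salient pixels should destroy the model's confidence in the original class.
```python
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(2, 3, 32, 32, requires_grad=True)
y = torch.tensor([1, 7])

# Saliency: magnitude of the input gradient of the true-class logits.
model(x).gather(1, y[:, None]).sum().backward()
saliency = x.grad.abs().sum(1, keepdim=True)            # (2, 1, 32, 32)

# Pseudo-intervention: zero out the top 10% most salient pixels.
thresh = saliency.flatten(1).quantile(0.9, dim=1).view(-1, 1, 1, 1)
x_int = x.detach() * (saliency < thresh).float()

# Contrastive-style objective: the intervened image should no longer be
# classified as the original class (a real objective would bound this term).
loss = -F.cross_entropy(model(x_int), y)
```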
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.