Acting as Inverse Inverse Planning
- URL: http://arxiv.org/abs/2305.16913v1
- Date: Fri, 26 May 2023 13:26:36 GMT
- Title: Acting as Inverse Inverse Planning
- Authors: Kartik Chandra, Tzu-Mao Li, Josh Tenenbaum, Jonathan Ragan-Kelley
- Abstract summary: We offer a novel computational framework for such tools.
To simulate the audience, we borrow an established principle from cognitive science.
We treat storytelling as "*inverse* inverse planning," the task of choosing actions to manipulate an inverse planner's inferences.
- Score: 19.267798639508946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Great storytellers know how to take us on a journey. They direct characters
to act -- not necessarily in the most rational way -- but rather in a way that
leads to interesting situations, and ultimately creates an impactful experience
for audience members looking on.
If audience experience is what matters most, then can we help artists and
animators *directly* craft such experiences, independent of the concrete
character actions needed to evoke those experiences? In this paper, we offer a
novel computational framework for such tools. Our key idea is to optimize
animations with respect to *simulated* audience members' experiences. To
simulate the audience, we borrow an established principle from cognitive
science: that human social intuition can be modeled as "inverse planning," the
task of inferring an agent's (hidden) goals from its (observed) actions.
Building on this model, we treat storytelling as "*inverse* inverse planning,"
the task of choosing actions to manipulate an inverse planner's inferences. Our
framework is grounded in literary theory, naturally capturing many storytelling
elements from first principles. We give a series of examples to demonstrate
this, with supporting evidence from human subject studies.
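To make the two nested inference problems concrete, the toy Python sketch below models a simulated audience as a Bayesian inverse planner and a storyteller as a brute-force "inverse inverse" planner. The 1-D world, the Boltzmann-rational agent model, the misdirection objective, and all names (GOALS, BETA, inverse_plan, inverse_inverse_plan) are illustrative assumptions for this sketch, not the paper's implementation, which optimizes animations against richer audience models.

```python
# A minimal sketch of "inverse inverse planning" (hypothetical toy example,
# not the authors' implementation). A character lives on a 1-D line with two
# candidate goal locations. The simulated audience is an inverse planner: it
# assumes the character is Boltzmann-rational and infers a posterior over
# goals from the observed moves. The storyteller (outer loop) then searches
# over short action sequences for the one that steers that posterior toward
# a desired audience belief (here, misdirection toward one goal).

import itertools
import math

GOALS = {"left": 0, "right": 10}    # candidate goal positions (assumed)
ACTIONS = {-1: "step left", +1: "step right"}
BETA = 1.5                          # rationality of the assumed agent model


def action_likelihood(pos, action, goal):
    """P(action | position, goal) under a Boltzmann-rational agent
    that prefers actions reducing distance to its goal."""
    def utility(a):
        return -abs((pos + a) - GOALS[goal])
    z = sum(math.exp(BETA * utility(a)) for a in ACTIONS)
    return math.exp(BETA * utility(action)) / z


def inverse_plan(start_pos, actions):
    """The simulated audience: Bayesian posterior over goals given actions."""
    posterior = {g: 1.0 / len(GOALS) for g in GOALS}  # uniform prior
    pos = start_pos
    for a in actions:
        for g in posterior:
            posterior[g] *= action_likelihood(pos, a, g)
        total = sum(posterior.values())
        posterior = {g: p / total for g, p in posterior.items()}
        pos += a
    return posterior


def inverse_inverse_plan(start_pos, horizon, target_goal):
    """The storyteller: choose the action sequence that maximizes the
    audience's inferred belief in `target_goal` (brute-force search)."""
    best_seq, best_belief = None, -1.0
    for seq in itertools.product(ACTIONS, repeat=horizon):
        belief = inverse_plan(start_pos, seq)[target_goal]
        if belief > best_belief:
            best_seq, best_belief = seq, belief
    return best_seq, best_belief


if __name__ == "__main__":
    # Make the audience believe the character is headed right,
    # e.g., to set up a later reversal.
    seq, belief = inverse_inverse_plan(start_pos=5, horizon=4,
                                       target_goal="right")
    print("actions:", [ACTIONS[a] for a in seq])
    print("audience P(goal=right):", round(belief, 3))
```

Swapping the outer objective (for example, keeping the posterior near uniform for suspense, or engineering a late posterior reversal for surprise) changes which audience experience the search targets; the paper's framework is what makes such objectives optimizable in the first place.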
Related papers
- Generating Visual Stories with Grounded and Coreferent Characters [63.07511918366848]
We present the first model capable of predicting visual stories with consistently grounded and coreferent character mentions.
Our model is finetuned on a new dataset which we build on top of the widely used VIST benchmark.
We also propose new evaluation metrics to measure the richness of characters and coreference in stories.
arXiv Detail & Related papers (2024-09-20T14:56:33Z) - HEART-felt Narratives: Tracing Empathy and Narrative Style in Personal Stories with LLMs [30.636456219922906]
Empathy serves as a cornerstone in enabling prosocial behaviors, and can be evoked through sharing of personal experiences in stories.
While empathy is influenced by narrative content, people also respond intuitively to the way a story is told, i.e., its narrative style.
We empirically examine and quantify this relationship between style and empathy using LLMs and large-scale crowdsourcing studies.
arXiv Detail & Related papers (2024-05-27T20:00:38Z) - Character-LLM: A Trainable Agent for Role-Playing [67.35139167985008]
Large language models (LLMs) can serve as agents that simulate human behaviors.
We introduce Character-LLM, which teaches LLMs to act as specific people such as Beethoven, Queen Cleopatra, and Julius Caesar.
arXiv Detail & Related papers (2023-10-16T07:58:56Z) - Story Shaping: Teaching Agents Human-like Behavior with Stories [9.649246837532417]
We introduce Story Shaping, in which a reinforcement learning agent infers tacit knowledge from an exemplar story of how to accomplish a task.
An intrinsic reward is generated based on the similarity between the agent's inferred world-state graph and the world graph inferred from the exemplar story (a toy sketch of such a graph-overlap reward appears after this list).
We conducted experiments in text-based games requiring commonsense reasoning and in shaping the behaviors of agents as virtual game characters.
arXiv Detail & Related papers (2023-01-24T16:19:09Z) - Great Expectations: Unsupervised Inference of Suspense, Surprise and
Salience in Storytelling [3.42658286826597]
The thesis trains a series of deep learning models by only reading stories, making it a self-supervised (or unsupervised) system.
Narrative theory methods are applied to the knowledge built into the deep learning models to directly infer suspense, surprise, and salience in stories.
arXiv Detail & Related papers (2022-06-20T11:00:23Z) - Computational Storytelling and Emotions: A Survey [56.95572957863576]
This survey paper is intended to summarize and contribute to the development of research being conducted on the relationship between stories and emotions.
We believe the goal of creativity research is not to replace humans with computers, but to find ways for humans and computers to collaborate in order to enhance creativity.
arXiv Detail & Related papers (2022-05-23T00:21:59Z) - Persona-Guided Planning for Controlling the Protagonist's Persona in
Story Generation [71.24817035071176]
We propose a planning-based generation model named CONPER to explicitly model the relationship between personas and events.
Both automatic and manual evaluation results demonstrate that CONPER outperforms state-of-the-art baselines for generating more coherent and persona-controllable stories.
arXiv Detail & Related papers (2022-04-22T13:45:02Z) - TVShowGuess: Character Comprehension in Stories as Speaker Guessing [23.21452223968301]
We propose a new task for assessing machines' skills of understanding fictional characters in narrative stories.
The task, TVShowGuess, builds on the scripts of TV series and takes the form of guessing the anonymous main characters based on the backgrounds of the scenes and the dialogues.
Our human study supports that this form of task covers comprehension of multiple types of character persona, including understanding characters' personalities, facts and memories of personal experience.
arXiv Detail & Related papers (2022-04-16T05:15:04Z) - A-ACT: Action Anticipation through Cycle Transformations [89.83027919085289]
We take a step back to analyze how the human capability to anticipate the future can be transferred to machine learning algorithms.
A recent study on human psychology explains that, in anticipating an occurrence, the human brain counts on both systems.
In this work, we study the impact of each system for the task of action anticipation and introduce a paradigm to integrate them in a learning framework.
arXiv Detail & Related papers (2022-04-02T21:50:45Z) - Shaping embodied agent behavior with activity-context priors from
egocentric video [102.0541532564505]
We introduce an approach to discover activity-context priors from in-the-wild egocentric video captured with human-worn cameras.
We encode our video-based prior as an auxiliary reward function that encourages an agent to bring compatible objects together before attempting an interaction.
We demonstrate our idea using egocentric EPIC-Kitchens video of people performing unscripted kitchen activities to benefit virtual household robotic agents performing various complex tasks in AI2-iTHOR.
arXiv Detail & Related papers (2021-10-14T20:02:59Z)
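As referenced in the Story Shaping entry above, here is a minimal, hypothetical sketch of a graph-overlap intrinsic reward. The triple representation and the Jaccard similarity are assumptions chosen for illustration, not that paper's actual method.

```python
# Hypothetical sketch of a graph-overlap intrinsic reward in the spirit of
# Story Shaping (not the paper's code). Both the agent's inferred world state
# and the exemplar story are represented as sets of (subject, relation, object)
# triples; the reward is their Jaccard overlap, so the agent is nudged toward
# states that resemble the story.

def graph_similarity_reward(agent_graph, story_graph):
    """Jaccard similarity between two triple sets, used as intrinsic reward."""
    agent_graph, story_graph = set(agent_graph), set(story_graph)
    if not agent_graph and not story_graph:
        return 0.0
    return len(agent_graph & story_graph) / len(agent_graph | story_graph)


if __name__ == "__main__":
    story = {("knight", "has", "sword"), ("knight", "in", "castle")}
    state = {("knight", "has", "sword"), ("knight", "in", "forest")}
    print(graph_similarity_reward(state, story))  # 0.333...
```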