Machine Learning for Utility Prediction in Argument-Based Computational
Persuasion
- URL: http://arxiv.org/abs/2112.04953v1
- Date: Thu, 9 Dec 2021 14:28:54 GMT
- Title: Machine Learning for Utility Prediction in Argument-Based Computational
Persuasion
- Authors: Ivan Donadello, Anthony Hunter, Stefano Teso, Mauro Dragoni
- Abstract summary: In real applications, such as healthcare, it is unlikely that the utility of the dialogue outcome will be the same, or the exact opposite, for the APS and the user.
We develop two Machine Learning methods that leverage information coming from the users to predict their utilities.
We evaluate EAI and EDS in a simulation setting and in a realistic case study concerning healthy eating habits.
- Score: 17.214664783818677
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated persuasion systems (APS) aim to persuade a user to believe
something by entering into a dialogue in which arguments and counterarguments
are exchanged. To maximize the probability that an APS is successful in
persuading a user, it can identify a global policy that will allow it to select
the best arguments it presents at each stage of the dialogue whatever arguments
the user presents. However, in real applications, such as healthcare, the
utility of the dialogue outcome is unlikely to be the same, or the exact
opposite, for the APS and the user. To deal with this situation,
games in extended form have been harnessed for argumentation in Bi-party
Decision Theory. This opens new problems that we address in this paper: (1) How
can we use Machine Learning (ML) methods to predict utility functions for
different subpopulations of users? and (2) How can we identify for a new user
the best utility function from amongst those that we have learned? To this
end, we develop two ML methods, EAI and EDS, that leverage information
coming from the users to predict their utilities. EAI is restricted to a fixed
amount of information, whereas EDS can choose the information that best detects
the subpopulation a user belongs to. We evaluate EAI and EDS in a simulation setting
and in a realistic case study concerning healthy eating habits. Results are
promising in both cases, but EDS is more effective at predicting useful utility
functions.
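The two-step idea in the abstract (learn a utility function per subpopulation of users, then match a new user to the best of the learned functions) can be sketched as follows. This is a minimal illustration only, not the paper's EAI or EDS algorithms: the synthetic data, the 2-means clustering, and the per-cluster mean-utility predictor are all assumptions made for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one user feature separates two latent subpopulations
# whose utilities for the dialogue outcome are roughly opposite.
n = 200
groups = rng.integers(0, 2, size=n)
features = rng.normal(size=(n, 2))
features[:, 0] += 4.0 * groups - 2.0  # subpopulations separate on feature 0
utilities = np.where(groups == 1, 0.8, -0.8) + 0.05 * rng.normal(size=n)

# Step 1: detect subpopulations with a tiny 2-means over the user features
# (initialized at the two extremes of feature 0 so both clusters are non-empty).
centroids = features[[features[:, 0].argmin(), features[:, 0].argmax()]]
for _ in range(20):
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centroids = np.array([features[labels == k].mean(axis=0) for k in range(2)])

# Step 2: learn one utility function per subpopulation
# (here simply the mean observed utility of the cluster).
utility_fn = {k: float(utilities[labels == k].mean()) for k in range(2)}

# Step 3: for a new user, pick the learned utility function
# of the closest subpopulation.
new_user = np.array([2.0, 0.0])
cluster = int(np.linalg.norm(centroids - new_user, axis=1).argmin())
predicted_utility = utility_fn[cluster]
```

In this toy setup the new user lands in the subpopulation whose utility is positive, so the dialogue policy would be optimized against that utility function rather than a single population-wide one.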
Related papers
- System-2 Recommenders: Disentangling Utility and Engagement in Recommendation Systems via Temporal Point-Processes [80.97898201876592]
We propose a generative model in which past content interactions impact the arrival rates of users based on a self-exciting Hawkes process.
We show analytically that given samples it is possible to disentangle System-1 and System-2 and allow content optimization based on user utility.
arXiv Detail & Related papers (2024-05-29T18:19:37Z)
- Counterfactual Reasoning Using Predicted Latent Personality Dimensions for Optimizing Persuasion Outcome [13.731895847081953]
We present a novel approach that tracks a user's latent personality dimensions (LPDs) during ongoing persuasion conversation.
We generate tailored counterfactual utterances based on these LPDs to optimize the overall persuasion outcome.
arXiv Detail & Related papers (2024-04-21T23:03:47Z)
- Query-Dependent Prompt Evaluation and Optimization with Offline Inverse RL [62.824464372594576]
We aim to enhance arithmetic reasoning ability of Large Language Models (LLMs) through zero-shot prompt optimization.
We identify a previously overlooked objective of query dependency in such optimization.
We introduce Prompt-OIRL, which harnesses offline inverse reinforcement learning to draw insights from offline prompting demonstration data.
arXiv Detail & Related papers (2023-09-13T01:12:52Z)
- Revealing User Familiarity Bias in Task-Oriented Dialogue via Interactive Evaluation [17.41434948048325]
We conduct an interactive user study to unveil how vulnerable task-oriented dialogue (TOD) systems are in realistic scenarios.
Our study reveals that conversations in open-goal settings lead to catastrophic failures of the system.
We discover a novel "pretending" behavior, in which the system pretends to handle user requests even though they are beyond the system's capabilities.
arXiv Detail & Related papers (2023-05-23T09:24:53Z)
- Learning Action-Effect Dynamics for Hypothetical Vision-Language Reasoning Task [50.72283841720014]
We propose a novel learning strategy that can improve reasoning about the effects of actions.
We demonstrate the effectiveness of our proposed approach and discuss its advantages over previous baselines in terms of performance, data efficiency, and generalization capability.
arXiv Detail & Related papers (2022-12-07T05:41:58Z)
- Evaluating Machine Unlearning via Epistemic Uncertainty [78.27542864367821]
This work presents an evaluation of Machine Unlearning algorithms based on uncertainty.
To the best of our knowledge, this is the first definition of a general evaluation of this kind.
arXiv Detail & Related papers (2022-08-23T09:37:31Z)
- Let's Go to the Alien Zoo: Introducing an Experimental Framework to Study Usability of Counterfactual Explanations for Machine Learning [6.883906273999368]
Counterfactual explanations (CFEs) have gained traction as a psychologically grounded approach to generate post-hoc explanations.
We introduce the Alien Zoo, an engaging, web-based and game-inspired experimental framework.
As a proof of concept, we demonstrate the practical efficacy and feasibility of this approach in a user study.
arXiv Detail & Related papers (2022-05-06T17:57:05Z)
- Advances and Challenges in Conversational Recommender Systems: A Survey [133.93908165922804]
We provide a systematic review of the techniques used in current conversational recommender systems (CRSs).
We summarize the key challenges of developing CRSs into five directions.
These research directions involve multiple research fields like information retrieval (IR), natural language processing (NLP), and human-computer interaction (HCI).
arXiv Detail & Related papers (2021-01-23T08:53:15Z)
- Explainable Empirical Risk Minimization [0.6299766708197883]
Successful application of machine learning (ML) methods becomes increasingly dependent on their interpretability or explainability.
This paper applies information-theoretic concepts to develop a novel measure for the subjective explainability of predictions delivered by a ML method.
Our main contribution is the explainable empirical risk minimization (EERM) principle of learning a hypothesis that optimally balances between the subjective explainability and risk.
arXiv Detail & Related papers (2020-09-03T07:16:34Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction.
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
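The last entry's measure, the conditional mutual information between explanation and prediction given the user's knowledge, can be made concrete on a toy discrete model. The joint table below and the variable names (u for user knowledge, e for explanation, y for prediction) are illustrative assumptions for this sketch, not the paper's actual model:

```python
import numpy as np

# Hypothetical joint distribution p(u, e, y) over three binary variables.
# When the user already knows the relevant fact (u=1), explanation and
# prediction are independent; when not (u=0), they are strongly coupled.
p = np.zeros((2, 2, 2))
p[0] = np.array([[0.20, 0.05], [0.05, 0.20]])      # p(u=0, e, y)
p[1] = np.array([[0.125, 0.125], [0.125, 0.125]])  # p(u=1, e, y)

def conditional_mutual_information(p):
    """I(E; Y | U) in nats for a joint probability table p[u, e, y]."""
    p_u = p.sum(axis=(1, 2))   # p(u)
    p_ue = p.sum(axis=2)       # p(u, e)
    p_uy = p.sum(axis=1)       # p(u, y)
    cmi = 0.0
    for u in range(p.shape[0]):
        for e in range(p.shape[1]):
            for y in range(p.shape[2]):
                if p[u, e, y] > 0:
                    # p(e,y|u) / (p(e|u) p(y|u)) = p(u,e,y) p(u) / (p(u,e) p(u,y))
                    cmi += p[u, e, y] * np.log(
                        p[u, e, y] * p_u[u] / (p_ue[u, e] * p_uy[u, y])
                    )
    return cmi

effect = conditional_mutual_information(p)
```

Here the u=1 half of the table contributes nothing (explanation and prediction are independent given full knowledge), so the measured effect of the explanation comes entirely from the users who lack the relevant knowledge.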
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.