A model to support collective reasoning: Formalization, analysis and
computational assessment
- URL: http://arxiv.org/abs/2007.06850v1
- Date: Tue, 14 Jul 2020 06:55:32 GMT
- Authors: Jordi Ganzer, Natalia Criado, Maite Lopez-Sanchez, Simon Parsons, Juan
A. Rodriguez-Aguilar
- Abstract summary: We propose a new model to represent human debates and methods to obtain collective conclusions from them.
This model overcomes drawbacks of existing approaches by allowing users to introduce new pieces of information into the discussion.
We show that aggregated opinions can be coherent even if there is a lack of consensus.
- Score: 1.126958266688732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by e-participation systems, in this paper we propose a new model to
represent human debates and methods to obtain collective conclusions from them.
This model overcomes drawbacks of existing approaches by allowing users to
introduce new pieces of information into the discussion, to relate them to
existing pieces, and also to express their opinion on the pieces proposed by
other users. In addition, our model does not assume that users' opinions are
rational in order to extract information from them, an assumption that
significantly limits current approaches. Instead, we define a weaker notion of
rationality that characterises coherent opinions, and we consider different
scenarios based on the coherence of individual opinions and the level of
consensus that users have on the debate structure. Considering these two
factors, we analyse the outcomes of different opinion aggregation functions
that compute a collective decision based on the individual opinions and the
debate structure. In particular, we demonstrate that aggregated opinions can be
coherent even if there is a lack of consensus and individual opinions are not
coherent. We conclude our analysis with a computational evaluation
demonstrating that collective opinions can be computed efficiently for
real-sized debates.
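The paper's own aggregation functions are not detailed in this listing; purely as an illustration of the general idea, a hypothetical majority-based aggregation over debate statements might look like the sketch below (the function name, statement ids, and labels are all invented for the example):

```python
from collections import Counter

def aggregate_opinions(opinions):
    """Majority vote per statement.

    opinions: list of dicts mapping statement id -> 'accept' or 'reject'
              (one dict per user; users may rate different statements).
    Returns a dict mapping each statement id to the collective label,
    with ties marked 'undecided'.
    """
    tallies = {}
    for opinion in opinions:
        for stmt, label in opinion.items():
            tallies.setdefault(stmt, Counter())[label] += 1
    collective = {}
    for stmt, counts in tallies.items():
        if counts["accept"] > counts["reject"]:
            collective[stmt] = "accept"
        elif counts["reject"] > counts["accept"]:
            collective[stmt] = "reject"
        else:
            collective[stmt] = "undecided"
    return collective

# Example: three users rate two statements.
users = [
    {"s1": "accept", "s2": "reject"},
    {"s1": "accept", "s2": "accept"},
    {"s1": "reject", "s2": "reject"},
]
print(aggregate_opinions(users))  # {'s1': 'accept', 's2': 'reject'}
```

Note that such a naive majority rule ignores the debate structure; the point of the paper is precisely to analyse when structure-aware aggregation yields coherent collective opinions.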
Related papers
- A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, thus opening up new future research directions in the area of formal argumentation and human-machine dialogues.
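As background for the acceptability semantics mentioned above (this is the standard grounded semantics for abstract argumentation, not the paper's ILP-based learning approach), a minimal fixpoint computation could look like:

```python
def grounded_extension(arguments, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (arguments, attacks) by least-fixpoint iteration.

    attacks: set of (attacker, target) pairs.
    """
    attackers = {a: {x for (x, t) in attacks if t == a} for a in arguments}

    def defended(s):
        # An argument is defended by s if every one of its attackers
        # is counter-attacked by some member of s.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s)
                       for b in attackers[a])}

    ext = set()
    while True:
        nxt = defended(ext)
        if nxt == ext:
            return ext
        ext = nxt

# Example: a attacks b, b attacks c; grounded semantics accepts a and c.
framework_args = {"a", "b", "c"}
framework_attacks = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(framework_args, framework_attacks)))  # ['a', 'c']
```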
arXiv Detail & Related papers (2023-10-18T20:18:05Z)
- Fostering User Engagement in the Critical Reflection of Arguments [3.26297440422721]
We propose a system that engages in a deliberative dialogue with a human.
We enable the system to intervene if the user is too focused on their pre-existing opinion.
We report on a user study with 58 participants to test our model and the effect of the intervention mechanism.
arXiv Detail & Related papers (2023-08-17T15:48:23Z)
- Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough to represent such counterfactual statements.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z)
- On the Complexity of Adversarial Decision Making [101.14158787665252]
We show that the Decision-Estimation Coefficient is necessary and sufficient to obtain low regret for adversarial decision making.
We provide new structural results that connect the Decision-Estimation Coefficient to variants of other well-known complexity measures.
arXiv Detail & Related papers (2022-06-27T06:20:37Z)
- Evaluating Bayesian Model Visualisations [0.39845810840390733]
Probabilistic models inform an increasingly broad range of business and policy decisions ultimately made by people.
Recent algorithmic, computational, and software framework development progress facilitate the proliferation of Bayesian probabilistic models.
While they can empower decision makers to explore complex queries and to perform what-if-style conditioning in theory, suitable visualisations and interactive tools are needed to maximise users' comprehension and rational decision making under uncertainty.
arXiv Detail & Related papers (2022-01-10T19:15:39Z)
- Aspect-Controllable Opinion Summarization [58.5308638148329]
We propose an approach that allows the generation of customized summaries based on aspect queries.
Using a review corpus, we create a synthetic training dataset of (review, summary) pairs enriched with aspect controllers.
We fine-tune a pretrained model using our synthetic dataset and generate aspect-specific summaries by modifying the aspect controllers.
arXiv Detail & Related papers (2021-09-07T16:09:17Z)
- Helping users discover perspectives: Enhancing opinion mining with joint topic models [5.2424255020469595]
This paper explores how opinion mining can be enhanced with joint topic modeling.
We evaluate four joint topic models (TAM, JST, VODUM, and LAM) in a user study assessing human understandability of the extracted perspectives.
arXiv Detail & Related papers (2020-10-23T16:13:06Z)
- Evaluating Interactive Summarization: an Expansion-Based Framework [97.0077722128397]
We develop an end-to-end evaluation framework for interactive summarization.
Our framework includes a procedure of collecting real user sessions and evaluation measures relying on standards.
All of our solutions are intended to be released publicly as a benchmark.
arXiv Detail & Related papers (2020-09-17T15:48:13Z)
- Explaining reputation assessments [6.87724532311602]
We propose an approach to explain the rationale behind assessments from quantitative reputation models.
Our approach adapts, extends and combines existing approaches for explaining decisions made using multi-attribute decision models.
arXiv Detail & Related papers (2020-06-15T23:19:35Z)
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations by robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.