An Item Response Theory Framework for Persuasion
- URL: http://arxiv.org/abs/2204.11337v1
- Date: Sun, 24 Apr 2022 19:14:11 GMT
- Title: An Item Response Theory Framework for Persuasion
- Authors: Anastassia Kornilova, Daniel Argyle, Vladimir Eidelman
- Abstract summary: We apply Item Response Theory, popular in education and political science research, to the analysis of argument persuasiveness in language.
We empirically evaluate the model's performance on three datasets, including a novel dataset in the area of political advocacy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we apply Item Response Theory, popular in education and
political science research, to the analysis of argument persuasiveness in
language. We empirically evaluate the model's performance on three datasets,
including a novel dataset in the area of political advocacy. We show the
advantages of separating these components under several style and content
representations, including evaluating the ability of the speaker embeddings
generated by the model to parallel real-world observations about
persuadability.
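The core idea of Item Response Theory is to model the probability of a positive response (here, being persuaded) as a function of a latent person trait and latent item parameters. The paper's exact parameterization is not given on this page, so the sketch below uses the standard two-parameter logistic (2PL) IRT model from the wider literature, with hypothetical names and values, purely as an illustration:

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT model (illustrative, not the paper's exact form).

    theta: latent trait of the respondent (here: persuadability)
    a:     item discrimination (how sharply the argument separates respondents)
    b:     item difficulty (how resistant respondents are to this argument)
    Returns P(persuaded) = sigmoid(a * (theta - b)).
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Toy example: 3 respondents x 2 arguments (values are made up)
theta = np.array([-1.0, 0.0, 1.0])   # respondent persuadability
a = np.array([1.5, 0.8])             # argument discrimination
b = np.array([0.0, 0.5])             # argument difficulty

# Broadcasting gives a 3x2 matrix of persuasion probabilities
P = irt_2pl(theta[:, None], a[None, :], b[None, :])
```

In this formulation, separating respondent ability (`theta`) from argument parameters (`a`, `b`) is what lets the model distinguish "persuadable audiences" from "persuasive arguments", which mirrors the separation of components the abstract describes.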
Related papers
- P^3SUM: Preserving Author's Perspective in News Summarization with Diffusion Language Models
We find that existing approaches alter the political opinions and stances of news articles in more than 50% of summaries.
We propose P3SUM, a diffusion model-based summarization approach controlled by political perspective classifiers.
Experiments on three news summarization datasets demonstrate that P3SUM outperforms state-of-the-art summarization systems.
arXiv Detail & Related papers (2023-11-16T10:14:28Z)
- "We Demand Justice!": Towards Social Context Grounding of Political Texts
Social media discourse frequently consists of 'seemingly similar language used by opposing sides of the political spectrum'.
This paper defines the context required to fully understand such ambiguous statements in a computational setting.
We propose two challenging datasets that require an understanding of the real-world context of the text.
arXiv Detail & Related papers (2023-11-15T16:53:35Z)
- How Well Do Text Embedding Models Understand Syntax?
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
arXiv Detail & Related papers (2023-11-14T08:51:00Z)
- Learning Disentangled Speech Representations
Disentangled representation learning from speech remains limited despite its importance in many application domains.
A key challenge is the lack of speech datasets with known generative factors on which to evaluate methods.
This paper proposes SynSpeech: a novel synthetic speech dataset with ground truth factors enabling research on disentangling speech representations.
arXiv Detail & Related papers (2023-11-04T04:54:17Z)
- Chain-of-Factors Paper-Reviewer Matching
We propose a unified model for paper-reviewer matching that jointly considers semantic, topic, and citation factors.
We demonstrate the effectiveness of our proposed Chain-of-Factors model in comparison with state-of-the-art paper-reviewer matching methods and scientific pre-trained language models.
arXiv Detail & Related papers (2023-10-23T01:29:18Z)
- Multi-Dimensional Evaluation of Text Summarization with In-Context Learning
In this paper, we study the efficacy of large language models as multi-dimensional evaluators using in-context learning.
Our experiments show that in-context learning-based evaluators are competitive with learned evaluation frameworks for the task of text summarization.
We then analyze the effects of factors such as the selection and number of in-context examples on performance.
arXiv Detail & Related papers (2023-06-01T23:27:49Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
- Revise and Resubmit: An Intertextual Model of Text-based Collaboration in Peer Review
Peer review is a key component of the publishing process in most fields of science.
Existing NLP studies focus on the analysis of individual texts, but editorial assistance often requires modeling interactions between pairs of texts.
arXiv Detail & Related papers (2022-04-22T16:39:38Z)
- Exploring Discourse Structures for Argument Impact Classification
This paper empirically shows that the discourse relations between two arguments along the context path are essential factors for identifying the persuasive power of an argument.
We propose DisCOC to inject and fuse the sentence-level structural information with contextualized features derived from large-scale language models.
arXiv Detail & Related papers (2021-06-02T06:49:19Z)
- The Role of Pragmatic and Discourse Context in Determining Argument Impact
This paper presents a new dataset to initiate the study of this aspect of argumentation.
It consists of a diverse collection of arguments covering 741 controversial topics and comprising over 47,000 claims.
We propose predictive models that incorporate the pragmatic and discourse context of argumentative claims and show that they outperform models that rely on claim-specific linguistic features for predicting the perceived impact of individual claims within a particular line of argument.
arXiv Detail & Related papers (2020-04-06T23:00:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.