Electoral Programs of German Parties 2021: A Computational Analysis Of
Their Comprehensibility and Likeability Based On SentiArt
- URL: http://arxiv.org/abs/2109.12500v1
- Date: Sun, 26 Sep 2021 05:27:14 GMT
- Title: Electoral Programs of German Parties 2021: A Computational Analysis Of
Their Comprehensibility and Likeability Based On SentiArt
- Authors: Arthur M. Jacobs and Annette Kinder
- Abstract summary: We analyze the electoral programs of six German parties issued before the parliamentary elections of 2021.
Using novel indices of the readability and emotion potential of texts computed via SentiArt, our data shed light on the similarities and differences of the programs.
They reveal that the programs of the SPD and CDU have the best chances to be comprehensible and likeable.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The electoral programs of six German parties issued before the 2021
parliamentary elections are analyzed using state-of-the-art computational tools
for quantitative narrative, topic, and sentiment analysis. We compare four
methods for computing the textual similarity of the programs: Jaccard bag
similarity, Latent Semantic Analysis, doc2vec, and sBERT, with representational
and computational complexity increasing from the first to the fourth method. A
new similarity measure for entire documents, derived from the Fowlkes-Mallows
score, is applied to k-means clusterings of sBERT-transformed sentences. Using
novel indices of the readability and emotion potential of texts computed via
SentiArt (Jacobs, 2019), our data shed light on the similarities and
differences of the programs regarding their length, main ideas,
comprehensibility, likeability, and semantic complexity. Among other things,
they reveal that the programs of the SPD and CDU have the best chances to be
comprehensible and likeable, all other things being equal, and they raise the
important issue of which similarity measure is optimal for comparing texts such
as electoral programs, which necessarily share many words. While such analyses
cannot replace qualitative analyses or a deep reading of the texts, they offer
predictions that can be verified in empirical studies and may motivate changes
to future electoral programs, potentially making them more comprehensible
and/or likeable.
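The abstract names two of the measures it uses: Jaccard bag similarity on word multisets, and a document similarity derived from the Fowlkes-Mallows score computed over two clusterings. A minimal sketch of the two underlying formulas might look like the following; the sBERT embedding and k-means steps from the paper are omitted, and the function names are illustrative, not the authors' code:

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def jaccard_bag_similarity(text_a: str, text_b: str) -> float:
    """Jaccard similarity on word bags (multisets): a shared word
    counts min(count_a, count_b) times in the intersection and
    max(count_a, count_b) times in the union."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    inter = sum((a & b).values())   # multiset intersection size
    union = sum((a | b).values())   # multiset union size
    return inter / union if union else 0.0

def fowlkes_mallows(labels_x: list, labels_y: list) -> float:
    """Fowlkes-Mallows score between two clusterings of the same
    items: FM = TP / sqrt((TP + FP) * (TP + FN)), counting pairs
    of items that are grouped together in each clustering."""
    tp = fp = fn = 0
    for i, j in combinations(range(len(labels_x)), 2):
        same_x = labels_x[i] == labels_x[j]
        same_y = labels_y[i] == labels_y[j]
        tp += same_x and same_y       # together in both clusterings
        fp += same_x and not same_y   # together only in X
        fn += same_y and not same_x   # together only in Y
    denom = sqrt((tp + fp) * (tp + fn))
    return tp / denom if denom else 0.0
```

For example, two identical clusterings yield a Fowlkes-Mallows score of 1.0, and the score shrinks toward 0 as the pair groupings diverge, which is what makes it usable as a document-level similarity once each document's sentences have been clustered.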
Related papers
- You Shall Know a Tool by the Traces it Leaves: The Predictability of Sentiment Analysis Tools [74.98850427240464]
We show that sentiment analysis tools disagree on the same dataset.
We show that the sentiment tool used for sentiment annotation can even be predicted from its outcome.
arXiv Detail & Related papers (2024-10-18T17:27:38Z)
- Predicting Text Preference Via Structured Comparative Reasoning [110.49560164568791]
We introduce SC, a prompting approach that predicts text preferences by generating structured intermediate comparisons.
We select consistent comparisons with a pairwise consistency comparator that ensures each aspect's comparisons clearly distinguish differences between texts.
Our comprehensive evaluations across various NLP tasks, including summarization, retrieval, and automatic rating, demonstrate that SC equips LLMs to achieve state-of-the-art performance in text preference prediction.
arXiv Detail & Related papers (2023-11-14T18:51:38Z)
- Rethinking Evaluation Metrics of Open-Vocabulary Segmentation [78.76867266561537]
The evaluation process still heavily relies on closed-set metrics without considering the similarity between predicted and ground truth categories.
To tackle this issue, we first survey eleven similarity measurements between two categorical words.
We then design novel evaluation metrics, namely Open mIoU, Open AP, and Open PQ, tailored to three open-vocabulary segmentation tasks.
arXiv Detail & Related papers (2023-11-06T18:59:01Z)
- Multilingual estimation of political-party positioning: From label aggregation to long-input Transformers [3.651047982634467]
We implement and compare two approaches to automatic scaling analysis of political-party manifestos.
We find that the task can be efficiently solved by state-of-the-art models, with label aggregation producing the best results.
arXiv Detail & Related papers (2023-10-19T08:34:48Z)
- Optimizing text representations to capture (dis)similarity between political parties [1.2891210250935146]
We look at the problem of modeling pairwise similarities between political parties.
Our research question is what level of structural information is necessary to create robust text representations.
We evaluate our models on the manifestos of German parties for the 2021 federal election.
arXiv Detail & Related papers (2022-10-21T14:24:57Z)
- Sentiment Analysis with R: Natural Language Processing for Semi-Automated Assessments of Qualitative Data [0.0]
This tutorial introduces the basic functions for performing a sentiment analysis with R and explains how text documents can be analysed step by step.
A comparison of two political speeches illustrates a possible use case.
arXiv Detail & Related papers (2022-06-25T13:25:39Z)
- TextEssence: A Tool for Interactive Analysis of Semantic Shifts Between Corpora [14.844685568451833]
We introduce TextEssence, an interactive system designed to enable comparative analysis of corpora using embeddings.
TextEssence includes visual, neighbor-based, and similarity-based modes of embedding analysis in a lightweight, web-based interface.
arXiv Detail & Related papers (2021-03-19T21:26:28Z)
- Graph-based Topic Extraction from Vector Embeddings of Text Documents: Application to a Corpus of News Articles [0.0]
We present an unsupervised framework that brings together powerful vector embeddings from natural language processing with tools from multiscale graph partitioning.
We show the advantages of graph-based clustering through end-to-end comparisons with other popular clustering and topic modelling methods.
This work is showcased through an analysis of a corpus of US news coverage during the presidential election year of 2016.
arXiv Detail & Related papers (2020-10-28T16:20:05Z)
- Curious Case of Language Generation Evaluation Metrics: A Cautionary Tale [52.663117551150954]
A few popular metrics remain as the de facto metrics to evaluate tasks such as image captioning and machine translation.
This is partly due to ease of use, and partly because researchers expect to see them and know how to interpret them.
In this paper, we urge the community for more careful consideration of how they automatically evaluate their models.
arXiv Detail & Related papers (2020-10-26T13:57:20Z)
- Predicting the Humorousness of Tweets Using Gaussian Process Preference Learning [56.18809963342249]
We present a probabilistic approach that learns to rank and rate the humorousness of short texts by exploiting human preference judgments and automatically sourced linguistic annotations.
We report system performance for the campaign's two subtasks, humour detection and funniness score prediction, and discuss some issues arising from the conversion between the numeric scores used in the HAHA@IberLEF 2019 data and the pairwise judgment annotations required for our method.
arXiv Detail & Related papers (2020-08-03T13:05:42Z)
- A computational model implementing subjectivity with the 'Room Theory': The case of detecting Emotion from Text [68.8204255655161]
This work introduces a new method to consider subjectivity and general context dependency in text analysis.
By using a similarity measure between words, we are able to extract the relative relevance of the elements in the benchmark.
This method could be applied to all the cases where evaluating subjectivity is relevant to understand the relative value or meaning of a text.
arXiv Detail & Related papers (2020-05-12T21:26:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.