Detecting Religious Language in Climate Discourse
- URL: http://arxiv.org/abs/2510.23395v1
- Date: Mon, 27 Oct 2025 14:54:51 GMT
- Title: Detecting Religious Language in Climate Discourse
- Authors: Evy Beijen, Pien Pieterse, Yusuf Çelik, Willem Th. van Peursen, Sandjai Bhulai, Meike Morren
- Abstract summary: This paper investigates how explicit and implicit forms of religious language appear in climate-related texts produced by secular and religious nongovernmental organizations (NGOs). We introduce a dual methodological approach: a rule-based model using a hierarchical tree of religious terms derived from ecotheology literature, and large language models (LLMs) operating in a zero-shot setting. Using a dataset of more than 880,000 sentences, we compare how these methods detect religious language and analyze points of agreement and divergence.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Religious language continues to permeate contemporary discourse, even in ostensibly secular domains such as environmental activism and climate change debates. This paper investigates how explicit and implicit forms of religious language appear in climate-related texts produced by secular and religious nongovernmental organizations (NGOs). We introduce a dual methodological approach: a rule-based model using a hierarchical tree of religious terms derived from ecotheology literature, and large language models (LLMs) operating in a zero-shot setting. Using a dataset of more than 880,000 sentences, we compare how these methods detect religious language and analyze points of agreement and divergence. The results show that the rule-based method consistently labels more sentences as religious than LLMs. These findings highlight not only the methodological challenges of computationally detecting religious language but also the broader tension over whether religious language should be defined by vocabulary alone or by contextual meaning. This study contributes to digital methods in religious studies by demonstrating both the potential and the limitations of approaches for analyzing how the sacred persists in climate discourse.
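The rule-based method described in the abstract can be sketched in a few lines: a hierarchical tree of religious terms is flattened into a lexicon, and a sentence is labeled religious if it contains any term. The tree below is purely illustrative; the paper's actual lexicon is derived from ecotheology literature and is not reproduced here.

```python
# Minimal sketch of a rule-based religious-language detector.
# TERM_TREE is a hypothetical stand-in for the paper's hierarchical
# lexicon of explicit and implicit religious terms.

import re

TERM_TREE = {
    "explicit": {
        "deity": ["god", "creator", "divine"],
        "practice": ["prayer", "blessing", "sacred"],
    },
    "implicit": {
        "eschatology": ["apocalypse", "salvation"],
        "stewardship": ["creation care", "dominion"],
    },
}

def flatten(tree):
    """Collect all leaf terms from the hierarchical term tree."""
    terms = []
    for branch in tree.values():
        for leaves in branch.values():
            terms.extend(leaves)
    return terms

LEXICON = flatten(TERM_TREE)
# Whole-word, case-insensitive matching over the flattened lexicon.
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, LEXICON)) + r")\b",
    re.IGNORECASE,
)

def is_religious(sentence):
    """Rule-based label: True if any lexicon term occurs in the sentence."""
    return bool(PATTERN.search(sentence))

print(is_religious("We must protect God's creation for future generations."))  # True
print(is_religious("Carbon emissions rose by 2% last year."))                  # False
```

A detector like this labels on vocabulary alone, which is consistent with the paper's finding that rule-based matching flags more sentences than zero-shot LLMs, since an LLM can weigh the contextual meaning of a term before labeling.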
Related papers
- Mechanistic Interpretability with SAEs: Probing Religion, Violence, and Geography in Large Language Models [0.0]
This paper explores how religion is internally represented in large language models (LLMs). We measure overlap between religion- and violence-related prompts and probe semantic patterns in activation contexts. While all five religions show comparable internal cohesion, Islam is more frequently linked to features associated with violent language.
arXiv Detail & Related papers (2025-09-22T12:09:21Z)
- Religious Bias Landscape in Language and Text-to-Image Models: Analysis, Detection, and Debiasing Strategies [16.177734242454193]
The widespread adoption of language models highlights the need for critical examinations of their inherent biases. This study systematically investigates religious bias in both language models and text-to-image generation models.
arXiv Detail & Related papers (2025-01-14T21:10:08Z)
- Computational Analysis of Character Development in Holocaust Testimonies [11.044534687337219]
This work presents a computational approach to analyze character development along the narrative timeline. We consider transcripts of Holocaust survivor testimonies as a test case, each telling the story of an individual in first-person terms. We focus on the survivor's religious trajectory, examining the evolution of their disposition toward religious belief and practice.
arXiv Detail & Related papers (2024-12-22T15:20:53Z)
- Are Language Models Agnostic to Linguistically Grounded Perturbations? A Case Study of Indic Languages [47.45957604683302]
We study whether pre-trained language models (PLMs) are agnostic to linguistically grounded attacks. Our findings reveal that PLMs are susceptible to linguistic perturbations, though slightly less so than to non-linguistic attacks.
arXiv Detail & Related papers (2024-12-14T12:10:38Z)
- Critical biblical studies via word frequency analysis: unveiling text authorship [7.2762881851201255]
We aim to differentiate between three distinct authors across numerous chapters spanning the first nine books of the Bible.
Our analysis indicates that the first two authors (D and DtrH) are much more closely related compared to P, a fact that aligns with expert assessments.
arXiv Detail & Related papers (2024-10-24T22:08:38Z)
- Modeling the Sacred: Considerations when Using Religious Texts in Natural Language Processing [1.7794383050238662]
Religious texts are expressions of culturally important values. Machine-learned models have a propensity to reproduce cultural values encoded in their training data. This paper argues that NLP's use of such texts raises considerations that go beyond model biases.
arXiv Detail & Related papers (2024-04-23T04:47:22Z)
- BabySLM: language-acquisition-friendly benchmark of self-supervised spoken language models [56.93604813379634]
Self-supervised techniques for learning speech representations have been shown to develop linguistic competence from exposure to speech without the need for human labels.
We propose a language-acquisition-friendly benchmark to probe spoken language models at the lexical and syntactic levels.
We highlight two exciting challenges that need to be addressed for further progress: bridging the gap between text and speech and between clean speech and in-the-wild speech.
arXiv Detail & Related papers (2023-06-02T12:54:38Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations [52.992875653864076]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account. We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed. Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
- Language Models as Inductive Reasoners [125.99461874008703]
We propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts.
We create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.
We provide the first and comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts.
arXiv Detail & Related papers (2022-12-21T11:12:14Z)
- A Call for More Rigor in Unsupervised Cross-lingual Learning [76.6545568416577]
An existing rationale for such research is based on the lack of parallel data for many of the world's languages.
We argue that a scenario without any parallel data and abundant monolingual data is unrealistic in practice.
arXiv Detail & Related papers (2020-04-30T17:06:23Z)
- On the Language Neutrality of Pre-trained Multilingual Representations [70.93503607755055]
We investigate the language-neutrality of multilingual contextual embeddings directly and with respect to lexical semantics.
Our results show that contextual embeddings are more language-neutral and, in general, more informative than aligned static word-type embeddings.
We show how to reach state-of-the-art accuracy on language identification and match the performance of statistical methods for word alignment of parallel sentences.
arXiv Detail & Related papers (2020-04-09T19:50:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.