Capturing Opinion Shifts in Deliberative Discourse through Frequency-based Quantum deep learning methods
- URL: http://arxiv.org/abs/2509.22603v1
- Date: Fri, 26 Sep 2025 17:23:55 GMT
- Title: Capturing Opinion Shifts in Deliberative Discourse through Frequency-based Quantum deep learning methods
- Authors: Rakesh Thakur, Harsh Chaturvedi, Ruqayya Shah, Janvi Chauhan, Ayush Sharma,
- Abstract summary: Deliberation plays a crucial role in shaping outcomes by weighing diverse perspectives before reaching decisions. With recent advancements in Natural Language Processing, it has become possible to computationally model deliberation. We present a comparative analysis of multiple NLP techniques to evaluate how effectively models interpret deliberative discourse.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deliberation plays a crucial role in shaping outcomes by weighing diverse perspectives before reaching decisions. With recent advancements in Natural Language Processing, it has become possible to computationally model deliberation by analyzing opinion shifts and predicting potential outcomes under varying scenarios. In this study, we present a comparative analysis of multiple NLP techniques to evaluate how effectively models interpret deliberative discourse and produce meaningful insights. Opinions from individuals of varied backgrounds were collected to construct a self-sourced dataset that reflects diverse viewpoints. Deliberation was simulated using product presentations enriched with striking facts, which often prompted measurable shifts in audience opinions. We compare two models, Frequency-Based Discourse Modulation and the Quantum-Deliberation Framework, both of which outperform existing state-of-the-art models. The findings highlight practical applications in public policy-making, debate evaluation, decision-support frameworks, and large-scale social media opinion mining.
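For illustration only, here is a minimal, hypothetical sketch of how a frequency-based baseline could quantify an opinion shift: a TF-IDF stance classifier scores a participant's pre- and post-deliberation statements, and the change in predicted stance probability is treated as the shift. The toy data, pipeline, and function names below are assumptions and do not reproduce the paper's Frequency-Based Discourse Modulation or Quantum-Deliberation Framework.

```python
# Hypothetical frequency-based opinion-shift baseline (TF-IDF + logistic regression).
# Illustrative sketch only; not the paper's actual method or dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Assumed toy training data: statements labeled 0 (opposed) or 1 (in favor).
train_texts = [
    "I would never buy this product",
    "These striking facts changed my mind",
    "The presentation was unconvincing",
    "I now fully support the proposal",
]
train_stance = [0, 1, 0, 1]

# Frequency-based text representation feeding a linear stance classifier.
stance_model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
stance_model.fit(train_texts, train_stance)

def opinion_shift(pre_statement: str, post_statement: str) -> float:
    """Change in predicted probability of a favorable stance after deliberation."""
    pre_p, post_p = stance_model.predict_proba([pre_statement, post_statement])[:, 1]
    return float(post_p - pre_p)

# Positive values suggest the presentation moved the participant toward favor.
print(opinion_shift("I doubt this is useful", "Those facts won me over"))
```

In practice such a classifier would be fit on the self-sourced dataset and the shifts aggregated across the audience; the quantum-inspired variant is not sketched here.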
Related papers
- Reasoning as State Transition: A Representational Analysis of Reasoning Evolution in Large Language Models [50.39102836928242]
We introduce a representational perspective to investigate the dynamics of the model's internal states. We discover that post-training yields only limited improvement in static initial representation quality.
arXiv Detail & Related papers (2026-01-31T15:23:33Z) - Clarity: The Flexibility-Interpretability Trade-Off in Sparsity-aware Concept Bottleneck Models [12.322360020814516]
Vision-Language Models (VLMs) are often treated as black boxes, with limited or non-existent investigation of their decision-making process. We introduce the notion of clarity, a measure capturing the interplay between downstream performance and the sparsity and precision of the concept representation. Our experiments reveal a critical trade-off between flexibility and interpretability, under which a given method can exhibit markedly different behaviors even at comparable performance levels.
arXiv Detail & Related papers (2026-01-29T16:28:55Z) - Towards Theoretical Understanding of Transformer Test-Time Computing: Investigation on In-Context Linear Regression [16.51420987738846]
Using more test-time computation during language model inference, such as generating more intermediate thoughts or sampling multiple candidate answers, has proven effective. This paper takes an initial step toward bridging the gap between practical language model inference and theoretical transformer analysis.
arXiv Detail & Related papers (2025-08-11T03:05:36Z) - From Thinking to Output: Chain-of-Thought and Text Generation Characteristics in Reasoning Language Models [10.38327947136263]
This paper proposes a novel framework for analyzing the reasoning characteristics of four cutting-edge large reasoning models. The diverse dataset consists of real-world, scenario-based questions covering logical deduction, causal inference, and multi-step problem-solving. The results uncover various patterns in how these models balance exploration and exploitation, handle problems, and reach conclusions.
arXiv Detail & Related papers (2025-06-20T14:02:16Z) - Interpreting Social Bias in LVLMs via Information Flow Analysis and Multi-Round Dialogue Evaluation [1.7997395646080083]
Large Vision Language Models (LVLMs) have achieved remarkable progress in multimodal tasks, yet they also exhibit notable social biases. We propose an explanatory framework that combines information flow analysis with multi-round dialogue evaluation. Experiments reveal that LVLMs exhibit systematic disparities in information usage when processing images of different demographic groups.
arXiv Detail & Related papers (2025-05-27T12:28:44Z) - Wait, that's not an option: LLMs Robustness with Incorrect Multiple-Choice Options [2.1184929769291294]
This work introduces a novel framework for evaluating LLMs' capacity to balance instruction-following with critical reasoning. We show that post-training aligned models often default to selecting invalid options, while base models exhibit improved refusal capabilities that scale with model size. We additionally conduct a parallel human study showing similar instruction-following biases, with implications for how these biases may propagate through human feedback datasets used in alignment.
arXiv Detail & Related papers (2024-08-27T19:27:43Z) - Evaluating Interventional Reasoning Capabilities of Large Language Models [58.52919374786108]
Large language models (LLMs) are used to automate decision-making tasks. In this paper, we evaluate whether LLMs can accurately update their knowledge of a data-generating process in response to an intervention. We create benchmarks that span diverse causal graphs (e.g., confounding, mediation) and variable types. These benchmarks allow us to isolate the ability of LLMs to accurately predict changes resulting from interventions, separate from their ability to memorize facts or find other shortcuts.
arXiv Detail & Related papers (2024-04-08T14:15:56Z) - Reasoning Abilities of Large Language Models: In-Depth Analysis on the Abstraction and Reasoning Corpus [4.569421189811511]
We introduce a novel approach to evaluate the inference and contextual understanding abilities of Large Language Models (LLMs).
We focus on three key components from the Language of Thought Hypothesis (LoTH): Logical Coherence, Compositionality, and Productivity.
Our experiments reveal that while LLMs demonstrate some inference capabilities, they still significantly lag behind human-level reasoning in these three aspects.
arXiv Detail & Related papers (2024-03-18T13:50:50Z) - SAIE Framework: Support Alone Isn't Enough -- Advancing LLM Training with Adversarial Remarks [47.609417223514605]
This work introduces the SAIE framework, which facilitates supportive and adversarial discussions between learner and partner models.
Our empirical evaluation shows that models fine-tuned with the SAIE framework outperform those trained with conventional fine-tuning approaches.
arXiv Detail & Related papers (2023-11-14T12:12:25Z) - Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z) - Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities on multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - On the Faithfulness Measurements for Model Interpretations [100.2730234575114]
Post-hoc interpretations aim to uncover how natural language processing (NLP) models make predictions.
To tackle these issues, we start with three criteria: the removal-based criterion, the sensitivity of interpretations, and the stability of interpretations.
Motivated by the desideratum of these faithfulness notions, we introduce a new class of interpretation methods that adopt techniques from the adversarial domain.
arXiv Detail & Related papers (2021-04-18T09:19:44Z)
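As a side note on the removal-based criterion mentioned in the entry above, here is a minimal, hedged sketch of one common way such a check is instantiated: delete the tokens an interpretation ranks highest and measure the drop in the model's predicted probability, where larger drops indicate more faithful attributions. The toy model and attribution scores below are placeholders, not that paper's implementation.

```python
# Hypothetical removal-based faithfulness check: delete the top-k attributed
# tokens and measure the drop in the model's probability for its original class.
from typing import Callable, List, Sequence

def removal_faithfulness(predict_proba: Callable[[str], float],
                         tokens: List[str],
                         attributions: Sequence[float],
                         k: int = 3) -> float:
    """Probability drop after removing the k highest-attributed tokens."""
    original = predict_proba(" ".join(tokens))
    top_k = set(sorted(range(len(tokens)),
                       key=lambda i: attributions[i], reverse=True)[:k])
    reduced = [t for i, t in enumerate(tokens) if i not in top_k]
    return original - predict_proba(" ".join(reduced))

def toy_model(text: str) -> float:
    """Stand-in for a real classifier's predicted-class probability."""
    return min(1.0, 0.2 + 0.2 * text.lower().count("great"))

tokens = "the movie was great great acting".split()
scores = [0.1, 0.2, 0.1, 0.9, 0.8, 0.3]  # pretend attribution scores
print(removal_faithfulness(toy_model, tokens, scores, k=2))  # larger drop = more faithful
```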