Investigating Faithfulness in Large Audio Language Models
- URL: http://arxiv.org/abs/2509.22363v2
- Date: Tue, 14 Oct 2025 16:24:33 GMT
- Title: Investigating Faithfulness in Large Audio Language Models
- Authors: Lovenya Jain, Pooneh Mousavi, Mirco Ravanelli, Cem Subakan
- Abstract summary: Faithfulness measures whether chain-of-thought representations accurately reflect a model's decision process and can be used as reliable explanations. This paper investigates the faithfulness of CoTs produced by several LALMs by applying targeted interventions. Applying these interventions across several datasets and tasks, our experiments suggest that LALMs generally produce CoTs that appear faithful to their underlying decision processes.
- Score: 22.917844547310626
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Faithfulness measures whether chain-of-thought (CoT) representations accurately reflect a model's decision process and can be used as reliable explanations. Prior work has shown that CoTs from text-based LLMs are often unfaithful. This question has not been explored for large audio-language models (LALMs), where faithfulness is critical for safety-sensitive applications. Reasoning in LALMs is also more challenging, as models must first extract relevant clues from audio before reasoning over them. In this paper, we investigate the faithfulness of CoTs produced by several LALMs by applying targeted interventions, including paraphrasing, filler token injection, early answering, and introducing mistakes, on two challenging reasoning datasets: SAKURA and MMAR. Across these interventions, datasets, and tasks, our experiments suggest that LALMs generally produce CoTs that appear to be faithful to their underlying decision processes.
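To make the interventions concrete, here is a minimal sketch of the early-answering probe: truncate the CoT at increasing fractions and check whether the final answer changes. This is an illustration under our own assumptions, not the authors' released code; `ask_model` is a hypothetical callable standing in for whatever LALM interface is used (for an audio task it would also receive the audio clip).

```python
# Illustrative sketch of the "early answering" faithfulness intervention.
# Assumption: `ask_model` is any callable mapping a text prompt to the
# model's answer string; it is NOT an API from the paper.

from typing import Callable, List


def early_answering_probe(
    question: str,
    cot_steps: List[str],
    full_answer: str,
    ask_model: Callable[[str], str],
    fractions=(0.25, 0.5, 0.75),
) -> float:
    """Return the share of truncation points at which the answer flips.

    A high flip rate suggests the CoT is load-bearing (faithful); if the
    answer never changes, the CoT may be a post-hoc rationalization.
    """
    flips = 0
    for frac in fractions:
        keep = max(1, int(len(cot_steps) * frac))
        truncated = " ".join(cot_steps[:keep])
        prompt = (
            f"{question}\n"
            f"Reasoning so far: {truncated}\n"
            "Based only on the reasoning above, give the final answer now:"
        )
        if ask_model(prompt).strip() != full_answer.strip():
            flips += 1
    return flips / len(fractions)
```

The other interventions named in the abstract (paraphrasing, filler token injection, introducing mistakes) follow the same pattern: perturb the CoT, regenerate the answer, and compare it with the original.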
Related papers
- Reasoning with Confidence: Efficient Verification of LLM Reasoning Steps via Uncertainty Heads [104.9566359759396]
We propose a lightweight alternative for step-level reasoning verification based on data-driven uncertainty scores. Our findings suggest that the internal states of LLMs encode their uncertainty and can serve as reliable signals for reasoning verification.
arXiv Detail & Related papers (2025-11-09T03:38:29Z) - Beyond Prompt-Induced Lies: Investigating LLM Deception on Benign Prompts [79.1081247754018]
Large Language Models (LLMs) are widely deployed in reasoning, planning, and decision-making tasks. We propose a framework based on Contact Searching Questions (CSQ) to quantify the likelihood of deception.
arXiv Detail & Related papers (2025-08-08T14:46:35Z) - Causal Prompting for Implicit Sentiment Analysis with Large Language Models [21.39152516811571]
Implicit Sentiment Analysis (ISA) aims to infer sentiment that is implied rather than explicitly stated. Recent prompting-based methods using Large Language Models (LLMs) have shown promise in ISA. We propose CAPITAL, a causal prompting framework that incorporates front-door adjustment into CoT reasoning (the textbook front-door adjustment formula is sketched after this list).
arXiv Detail & Related papers (2025-07-01T03:01:09Z) - Misaligning Reasoning with Answers -- A Framework for Assessing LLM CoT Robustness [3.9930400744726273]
We design a novel evaluation framework, MATCHA, to investigate the relationship between answers and reasoning. In domains like education and healthcare, reasoning is key for model trustworthiness. Our results show that LLMs exhibit greater vulnerability to input perturbations on multi-step and commonsense tasks than on logical tasks.
arXiv Detail & Related papers (2025-05-23T02:42:16Z) - Benchmarking and Confidence Evaluation of LALMs For Temporal Reasoning [30.308743810639758]
Large audio language models (LALMs) have to be evaluated on reasoning-related tasks, which differ from traditional classification or generation tasks. We benchmark open-source LALMs and observe that they are consistently behind human capabilities on the tasks in the TREA dataset. Our analysis shows that accuracy and uncertainty metrics are not necessarily correlated, which points to a need for holistic evaluation of LALMs in high-stakes applications.
arXiv Detail & Related papers (2025-05-19T13:46:35Z) - Collaborative Stance Detection via Small-Large Language Model Consistency Verification [11.512736305376654]
Stance detection on social media aims to identify attitudes expressed in tweets towards specific targets. Heavily relying on Large Language Models (LLMs) for stance detection is impractical for real-world social media monitoring systems. We propose Collaborative Stance Detection via Small-Large Language Model Consistency Verification.
arXiv Detail & Related papers (2025-02-27T10:30:50Z) - Aligning Large Language Models for Faithful Integrity Against Opposing Argument [71.33552795870544]
Large Language Models (LLMs) have demonstrated impressive capabilities in complex reasoning tasks. They can be easily misled by unfaithful arguments during conversations, even when their original statements are correct. We propose a novel framework, named Alignment for Faithful Integrity with Confidence Estimation.
arXiv Detail & Related papers (2025-01-02T16:38:21Z) - Understanding the Relationship between Prompts and Response Uncertainty in Large Language Models [55.332004960574004]
Large language models (LLMs) are widely used in decision-making, but their reliability, especially in critical tasks like healthcare, is not well-established. This paper investigates how the uncertainty of responses generated by LLMs relates to the information provided in the input prompt. We propose a prompt-response concept model that explains how LLMs generate responses and helps understand the relationship between prompts and response uncertainty.
arXiv Detail & Related papers (2024-07-20T11:19:58Z) - Analyzing the Role of Semantic Representations in the Era of Large Language Models [104.18157036880287]
We investigate the role of semantic representations in the era of large language models (LLMs).
We propose an AMR-driven chain-of-thought prompting method, which we call AMRCoT.
We find that it is difficult to predict which input examples AMR may help or hurt on, but errors tend to arise with multi-word expressions.
arXiv Detail & Related papers (2024-05-02T17:32:59Z) - Causal Prompting: Debiasing Large Language Model Prompting based on Front-Door Adjustment [32.12998469814097]
A novel causal prompting method based on front-door adjustment is proposed to effectively mitigate the biases of Large Language Models (LLMs). Experimental results show that the proposed causal prompting approach achieves excellent performance across seven natural language processing datasets.
arXiv Detail & Related papers (2024-03-05T07:47:34Z) - FaithLM: Towards Faithful Explanations for Large Language Models [60.45183469474916]
We introduce FaithLM, a model-agnostic framework that evaluates and improves the faithfulness of large language models. We show that FaithLM consistently increases faithfulness and produces explanations more aligned with human rationales than strong self-explanation baselines.
arXiv Detail & Related papers (2024-02-07T09:09:14Z) - The Curious Case of Hallucinatory (Un)answerability: Finding Truths in the Hidden States of Over-Confident Large Language Models [46.990141872509476]
We study the behavior of large language models (LLMs) when presented with (un)answerable queries.
Our results show strong indications that such models encode the answerability of an input query, with the representation of the first decoded token often being a strong indicator.
arXiv Detail & Related papers (2023-10-18T11:01:09Z) - Shortcut Learning of Large Language Models in Natural Language Understanding [119.45683008451698]
Large language models (LLMs) have achieved state-of-the-art performance on a series of natural language understanding tasks.
They might rely on dataset bias and artifacts as shortcuts for prediction.
This has significantly affected their generalizability and adversarial robustness.
arXiv Detail & Related papers (2022-08-25T03:51:39Z)
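For reference, the front-door adjustment invoked by the CAPITAL and causal prompting entries above is the standard causal-inference identity below; this is the textbook formula, not necessarily either paper's exact estimator. With prompt \(X\), mediator \(M\) (e.g., the generated chain of thought), and output \(Y\):

\[
P\bigl(Y \mid \mathrm{do}(X = x)\bigr)
  = \sum_{m} P(M = m \mid X = x)
    \sum_{x'} P(Y \mid X = x', M = m)\, P(X = x')
\]

Intuitively, the prompt's effect on the answer is routed entirely through the mediator, so confounding between prompt and output can be averaged out over alternative prompts \(x'\).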
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences of its use.