Using Mobile Data and Deep Models to Assess Auditory Verbal Hallucinations
- URL: http://arxiv.org/abs/2304.11049v1
- Date: Thu, 20 Apr 2023 15:37:34 GMT
- Title: Using Mobile Data and Deep Models to Assess Auditory Verbal Hallucinations
- Authors: Shayan Mirjafari, Subigya Nepal, Weichen Wang, Andrew T. Campbell
- Abstract summary: A common form of auditory hallucination is hearing voices in the absence of any speaker.
We study N=435 individuals who experience hearing voices to assess auditory verbal hallucination.
- Score: 3.676944894021643
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Hallucination is an apparent perception in the absence of real
external sensory stimuli. An auditory hallucination is a perception of hearing
sounds that are not real. A common form of auditory hallucination is hearing
voices in the absence of any speaker, which is known as Auditory Verbal
Hallucination (AVH). AVHs are fragments of the mind's creation that occur
mostly in people diagnosed with mental illnesses such as bipolar disorder and
schizophrenia. Assessing the valence of hallucinated voices (i.e., how negative
or positive the voices are) can help measure the severity of a mental illness.
We study N=435 individuals who experience hearing voices to assess auditory
verbal hallucination. Participants report the valence of the voices they hear
four times a day for a month through ecological momentary assessments,
answering questions on a four-point scale from ``not at all'' to ``extremely''.
We collect these self-reports as the valence supervision of AVH events via a
mobile application. Using the application, participants also record audio
diaries to verbally describe the content of the hallucinated voices. In
addition, we passively collect mobile sensing data as contextual signals. We
then examine how predictive these linguistic and contextual cues from the
audio diaries and mobile sensing data are of an auditory verbal hallucination
event. Finally, using transfer learning and data fusion techniques, we train a
neural network model that predicts the valence of AVH, achieving 54\% top-1
and 72\% top-2 F1 scores.
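As a rough illustration of the modeling setup described above, the sketch below fuses a pretrained text embedding of an audio-diary transcript (the transfer-learning component) with passive-sensing features in a small late-fusion network, and scores predictions with top-1 and top-2 macro F1. The feature dimensions, branch sizes, and the top-k F1 convention are illustrative assumptions, not the authors' actual architecture or evaluation protocol.

```python
# Hypothetical late-fusion sketch (PyTorch): combine a pretrained text
# embedding with passive-sensing features to predict a 4-class AVH
# valence label ("not at all" .. "extremely"). Dimensions are assumptions.
import torch
import torch.nn as nn
from sklearn.metrics import f1_score

class ValenceFusionNet(nn.Module):
    def __init__(self, text_dim=768, sensing_dim=32, hidden=128, n_classes=4):
        super().__init__()
        self.text_branch = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.sensing_branch = nn.Sequential(nn.Linear(sensing_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, n_classes)    # fuse by concatenation

    def forward(self, text_emb, sensing_feat):
        fused = torch.cat([self.text_branch(text_emb),
                           self.sensing_branch(sensing_feat)], dim=-1)
        return self.head(fused)                          # class logits

def top_k_f1(logits, labels, k=2):
    """Macro F1 where a prediction counts as correct if the true label is
    among the k highest-scoring classes (one common top-k convention)."""
    topk = logits.topk(k, dim=-1).indices                # (batch, k)
    hit = (topk == labels.unsqueeze(-1)).any(dim=-1)     # true label in top-k?
    pred = torch.where(hit, labels, topk[:, 0])          # else fall back to top-1
    return f1_score(labels.numpy(), pred.numpy(), average="macro")

# Toy usage with random tensors standing in for real diary/sensing features.
model = ValenceFusionNet()
text_emb = torch.randn(8, 768)     # e.g. frozen BERT-style diary embeddings
sensing = torch.randn(8, 32)       # e.g. mobility/sleep/phone-usage features
labels = torch.randint(0, 4, (8,))
with torch.no_grad():
    logits = model(text_emb, sensing)
print("top-1 F1:", top_k_f1(logits, labels, k=1))
print("top-2 F1:", top_k_f1(logits, labels, k=2))
```

In practice the text branch would start from embeddings produced by a pretrained speech or language encoder adapted to the diary transcripts, which is where the transfer learning described in the abstract would enter.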
Related papers
- Investigation of Whisper ASR Hallucinations Induced by Non-Speech Audio [15.878350948461646]
We investigate hallucinations of the Whisper ASR model induced by non-speech audio segments present during inference.
By inducing hallucinations with various types of sounds, we show that there exists a set of hallucinations that appear frequently.
We then study hallucinations caused by the augmentation of speech with such sounds.
arXiv Detail & Related papers (2025-01-20T10:14:52Z)
- Data-augmented phrase-level alignment for mitigating object hallucination [52.43197107069751]
Multimodal Large Language Models (MLLMs) often generate factually inaccurate information, referred to as hallucination.
We introduce Data-augmented Phrase-level Alignment (DPA), a novel loss which can be applied to instruction-tuned off-the-shelf MLLMs to mitigate hallucinations.
arXiv Detail & Related papers (2024-05-28T23:36:00Z)
- Fakes of Varying Shades: How Warning Affects Human Perception and Engagement Regarding LLM Hallucinations [9.740345290187307]
This research aims to understand the human perception of hallucinations by systematically varying the degree of hallucination.
We observed that warnings improved the detection of hallucinations without significantly affecting the perceived truthfulness of genuine content.
arXiv Detail & Related papers (2024-04-04T18:34:32Z)
- A Cause-Effect Look at Alleviating Hallucination of Knowledge-grounded Dialogue Generation [51.53917938874146]
We propose a possible solution for alleviating the hallucination in KGD by exploiting the dialogue-knowledge interaction.
Experimental results of our example implementation show that this method can reduce hallucination without disrupting other dialogue performance.
arXiv Detail & Related papers (2024-04-04T14:45:26Z)
- On Large Language Models' Hallucination with Regard to Known Facts [74.96789694959894]
Large language models are successful in answering factoid questions but are also prone to hallucination.
We investigate the phenomenon of LLMs possessing correct answer knowledge yet still hallucinating from the perspective of inference dynamics.
Our study sheds light on the reasons for LLMs' hallucinations on facts they know and, more importantly, on accurately predicting when they are hallucinating.
arXiv Detail & Related papers (2024-03-29T06:48:30Z)
- Careless Whisper: Speech-to-Text Hallucination Harms [0.5242869847419834]
We evaluate OpenAI's Whisper, a state-of-the-art automated speech recognition service.
We find that roughly 1% of audio transcriptions contained entire hallucinated phrases or sentences.
We thematically analyze the Whisper-hallucinated content, finding that 38% of hallucinations include explicit harms.
arXiv Detail & Related papers (2024-02-12T19:35:37Z)
- Fine-grained Hallucination Detection and Editing for Language Models [109.56911670376932]
Large language models (LMs) are prone to generating factual errors, which are often called hallucinations.
We introduce a comprehensive taxonomy of hallucinations and argue that hallucinations manifest in diverse forms.
We propose a novel task of automatic fine-grained hallucination detection and construct a new evaluation benchmark, FavaBench.
arXiv Detail & Related papers (2024-01-12T19:02:48Z)
- Hallucinations in Neural Automatic Speech Recognition: Identifying Errors and Hallucinatory Models [11.492702369437785]
Hallucinations are semantically unrelated to the source utterance, yet still fluent and coherent.
We show that commonly used metrics, such as word error rates, cannot differentiate between hallucinatory and non-hallucinatory models.
We devise a framework for identifying hallucinations by analysing their semantic connection with the ground truth and their fluency.
arXiv Detail & Related papers (2024-01-03T06:56:56Z)
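The ASR paper above separates hallucinations from ordinary recognition errors by checking whether a hypothesis is semantically disconnected from the ground truth even though it reads fluently. Below is a minimal sketch of the semantic-similarity side of that idea only, not the paper's actual framework; the jiwer and sentence-transformers packages and the similarity threshold are assumptions, and a fluency score (e.g., language-model perplexity) would be needed to complete the check.

```python
# Illustrative only: flag ASR hypotheses that are semantically unrelated to
# the reference, the general signature of a hallucination rather than an
# ordinary phonetic error. Library choices and the threshold are assumptions.
from jiwer import wer
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def flag_hallucination(reference: str, hypothesis: str,
                       sim_threshold: float = 0.4) -> dict:
    error_rate = wer(reference, hypothesis)              # surface-level metric
    emb = encoder.encode([reference, hypothesis], convert_to_tensor=True)
    similarity = util.cos_sim(emb[0], emb[1]).item()     # semantic connection
    return {
        "wer": error_rate,
        "semantic_similarity": similarity,
        # Low semantic similarity despite a fluent-looking hypothesis points
        # to hallucination; WER alone cannot make this distinction.
        "hallucination_candidate": similarity < sim_threshold,
    }

print(flag_hallucination("turn the lights off in the kitchen",
                         "the stock market closed higher today"))
```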
- HalluciDoctor: Mitigating Hallucinatory Toxicity in Visual Instruction Data [102.56792377624927]
Hallucinations inherent in machine-generated data remain under-explored.
We present a novel hallucination detection and elimination framework, HalluciDoctor, based on the cross-checking paradigm.
Our method successfully mitigates 44.6% of hallucinations in relative terms and maintains competitive performance compared to LLaVA.
arXiv Detail & Related papers (2023-11-22T04:52:58Z)
- Evaluating Hallucinations in Chinese Large Language Models [65.4771562909392]
We establish a benchmark named HalluQA (Chinese Hallucination Question-Answering) to measure the hallucination phenomenon in Chinese large language models.
We consider two types of hallucinations: imitative falsehoods and factual errors, and we construct adversarial samples based on GLM-130B and ChatGPT.
For evaluation, we design an automated evaluation method using GPT-4 to judge whether a model output is hallucinated.
arXiv Detail & Related papers (2023-10-05T07:57:09Z)
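The HalluQA entry above relies on GPT-4 as an automated judge of whether a model output is hallucinated. A generic sketch of such an LLM-as-judge call follows; the prompt wording and the use of the openai Python client are illustrative assumptions, not HalluQA's actual evaluation prompt or pipeline.

```python
# Illustrative LLM-as-judge call, not HalluQA's actual prompt or pipeline.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def judge_hallucination(question: str, reference: str, answer: str) -> bool:
    prompt = (
        "Question: {q}\nReference answer: {r}\nModel answer: {a}\n\n"
        "Does the model answer contain hallucinated (unsupported or factually "
        "incorrect) content relative to the reference? Reply with YES or NO."
    ).format(q=question, r=reference, a=answer)
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    # Treat any reply starting with YES as a hallucination verdict.
    return resp.choices[0].message.content.strip().upper().startswith("YES")

# Example: a fabricated specific claim should be judged as a hallucination.
print(judge_hallucination(
    "When was the Great Wall of China built?",
    "Construction spanned many dynasties, with major work under the Ming dynasty.",
    "It was built in a single year, 1368.",
))
```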
- The Curious Case of Hallucinations in Neural Machine Translation [5.3180458405676205]
Hallucinations in Neural Machine Translation lie at an extreme end of the spectrum of NMT pathologies.
We consider hallucinations under corpus-level noise (without any source perturbation) and demonstrate that two prominent types of natural hallucinations could be generated and explained through specific corpus-level noise patterns.
We elucidate the phenomenon of hallucination amplification in popular data-generation processes such as Backtranslation and sequence-level Knowledge Distillation.
arXiv Detail & Related papers (2021-04-14T08:09:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.