Misinformation with Legal Consequences (MisLC): A New Task Towards Harnessing Societal Harm of Misinformation
- URL: http://arxiv.org/abs/2410.03829v1
- Date: Fri, 4 Oct 2024 18:00:28 GMT
- Title: Misinformation with Legal Consequences (MisLC): A New Task Towards Harnessing Societal Harm of Misinformation
- Authors: Chu Fei Luo, Radin Shayanfar, Rohan Bhambhoria, Samuel Dahan, Xiaodan Zhu
- Abstract summary: We take a novel angle to consolidate the definition of misinformation detection using legal issues as a measurement of societal ramifications.
We introduce a new task: Misinformation with Legal Consequences (MisLC), which leverages definitions from a wide range of legal domains.
We advocate a two-step dataset curation approach that utilizes crowd-sourced checkworthiness and expert evaluations of misinformation.
- Score: 24.936670177298584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Misinformation, defined as false or inaccurate information, can result in significant societal harm when it is spread with malicious or even innocuous intent. The rapid exchange of information online necessitates advanced detection mechanisms to mitigate misinformation-induced harm. Existing research, however, has predominantly focused on assessing veracity, overlooking the legal implications and social consequences of misinformation. In this work, we take a novel angle to consolidate the definition of misinformation detection using legal issues as a measurement of societal ramifications, aiming to bring interdisciplinary efforts to bear on misinformation and its consequences. We introduce a new task: Misinformation with Legal Consequences (MisLC), which leverages definitions from a wide range of legal domains covering 4 broader legal topics and 11 fine-grained legal issues, including hate speech, election laws, and privacy regulations. For this task, we advocate a two-step dataset curation approach that utilizes crowd-sourced checkworthiness and expert evaluations of misinformation. We provide insights about the MisLC task through empirical evidence, from the problem definition to experiments and expert involvement. While the latest large language models and retrieval-augmented generation are effective baselines for the task, we find they are still far from replicating expert performance.
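To make the baseline setting concrete, here is a minimal sketch of a retrieval-augmented classifier for the task: dense retrieval selects evidence passages, and a language model is prompted for a binary decision. The sentence-transformers retriever, the prompt wording, and the `generate` placeholder are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of a retrieval-augmented MisLC-style classifier.
# Assumptions: sentence-transformers for dense retrieval; `generate`
# is a placeholder for any instruction-tuned LLM call.
import numpy as np
from sentence_transformers import SentenceTransformer

retriever = SentenceTransformer("all-MiniLM-L6-v2")

def top_k_evidence(claim: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank corpus passages by cosine similarity to the claim."""
    claim_vec = retriever.encode([claim], normalize_embeddings=True)
    doc_vecs = retriever.encode(corpus, normalize_embeddings=True)
    scores = (doc_vecs @ claim_vec.T).ravel()
    return [corpus[i] for i in np.argsort(-scores)[:k]]

def classify_mislc(claim: str, corpus: list[str], generate) -> str:
    """Prompt an LLM to flag claims with potential legal consequences."""
    evidence = "\n".join(top_k_evidence(claim, corpus))
    prompt = (
        "Given the evidence below, decide whether the claim is misinformation "
        "with potential legal consequences (e.g., hate speech, election law, "
        "privacy). Answer 'MisLC' or 'Not MisLC'.\n"
        f"Evidence:\n{evidence}\nClaim: {claim}\nAnswer:"
    )
    return generate(prompt)  # placeholder for the actual LLM call
```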
Related papers
- Missci: Reconstructing Fallacies in Misrepresented Science [84.32990746227385]
Health-related misinformation on social networks can lead to poor decision-making and real-world dangers.
Missci is a novel argumentation-theoretical model for fallacious reasoning.
We present Missci as a dataset to test the critical reasoning abilities of large language models.
arXiv Detail & Related papers (2024-06-05T12:11:10Z) - DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks, aiming to enhance the representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
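The shallow-decoder idea can be illustrated with a minimal PyTorch sketch: reconstruction is forced through a single [CLS] vector, so the encoder must pack case semantics into that bottleneck. The layer sizes and the omission of attention masks are simplifications; this is not DELTA's actual architecture.

```python
# Illustrative information bottleneck via a shallow decoder,
# not DELTA's actual architecture. Masks omitted for brevity.
import torch.nn as nn

class BottleneckEncoder(nn.Module):
    def __init__(self, vocab_size: int = 30522, d_model: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=6)
        # Deliberately shallow decoder: reconstruction can only draw on
        # the single [CLS] vector, forming an information bottleneck.
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=1)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, input_ids):
        hidden = self.encoder(self.embed(input_ids))
        cls = hidden[:, :1, :]  # bottleneck: the [CLS] position only
        recon = self.decoder(self.embed(input_ids), memory=cls)
        return cls.squeeze(1), self.lm_head(recon)
```

Training the reconstruction head then sharpens the [CLS] representation used for retrieval.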
arXiv Detail & Related papers (2024-03-27T10:40:14Z) - The Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review on Large Language Models (LLMs) [0.0]
Since the introduction of ChatGPT, Large Language Models (LLMs) have received enormous attention in healthcare.
Despite their potential benefits, researchers have underscored various ethical implications.
This work aims to map the ethical landscape surrounding the current stage of deployment of LLMs in medicine and healthcare.
arXiv Detail & Related papers (2024-03-21T15:20:07Z) - Correcting misinformation on social media with a large language model [14.69780455372507]
Real-world misinformation, often multimodal, can mislead through diverse tactics such as conflating correlation with causation.
Such misinformation is severely understudied, challenging to address, and harms various social domains, particularly on social media.
We propose MUSE, an LLM augmented with access to up-to-date information and the ability to evaluate its credibility.
arXiv Detail & Related papers (2024-03-17T10:59:09Z) - The Perils & Promises of Fact-checking with Large Language Models [55.869584426820715]
Large Language Models (LLMs) are increasingly trusted to write academic papers, lawsuits, and news articles.
We evaluate the use of LLM agents in fact-checking by having them phrase queries, retrieve contextual data, and make decisions.
Our results show the enhanced prowess of LLMs when equipped with contextual information.
While LLMs show promise in fact-checking, caution is essential due to inconsistent accuracy.
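Schematically, the agent loop evaluated in this paper can be sketched as follows; `llm` and `search` are hypothetical stand-ins for a language model call and a retrieval API, not the authors' implementation.

```python
# Schematic LLM fact-checking agent: phrase a query, retrieve
# context, then decide. `llm` and `search` are hypothetical stand-ins.
from typing import Callable

def fact_check(claim: str,
               llm: Callable[[str], str],
               search: Callable[[str], list[str]],
               max_rounds: int = 3) -> str:
    context: list[str] = []
    for _ in range(max_rounds):
        # Step 1: phrase a query; Step 2: retrieve contextual data.
        query = llm(f"Write a search query to verify: {claim}")
        context.extend(search(query))
        # Step 3: decide, or loop again if evidence is insufficient.
        verdict = llm(
            "Context:\n" + "\n".join(context) +
            f"\nClaim: {claim}\n"
            "Answer 'Supported', 'Refuted', or 'NotEnoughInfo'."
        )
        if verdict != "NotEnoughInfo":
            return verdict
    return "NotEnoughInfo"
```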
arXiv Detail & Related papers (2023-10-20T14:49:47Z) - Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
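One way to operationalize visual-semantic inconsistency detection today is to score image-caption agreement with a pretrained joint embedding model. The sketch below uses CLIP as a stand-in, which postdates and differs from the paper's original approach.

```python
# Sketch of visual-semantic consistency scoring between an article's
# image and its caption. CLIP is a modern stand-in, not the paper's
# original detector.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def consistency_score(image_path: str, caption: str) -> float:
    """Image-caption agreement; low scores suggest a mismatched
    (possibly machine-generated) caption."""
    image = Image.open(image_path)
    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    return out.logits_per_image.item()
```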
arXiv Detail & Related papers (2020-09-16T14:13:15Z) - The COVID-19 Infodemic: Can the Crowd Judge Recent Misinformation Objectively? [17.288917654501265]
We study whether crowdsourcing is an effective and reliable method to assess the truthfulness of statements during a pandemic.
We specifically target statements related to the COVID-19 health emergency, which was still ongoing at the time of the study.
In our experiment, crowd workers are asked to assess the truthfulness of statements, as well as to provide evidence for their assessments in the form of a URL and a text justification.
arXiv Detail & Related papers (2020-08-13T05:53:24Z) - Misinformation Has High Perplexity [55.47422012881148]
We propose to leverage perplexity to debunk false claims in an unsupervised manner.
First, we extract reliable evidence from scientific and news sources according to sentence similarity to the claims.
Second, we prime a language model with the extracted evidence and finally evaluate the correctness of given claims based on the perplexity scores at debunking time.
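The two-step procedure reads directly as code: condition a language model on retrieved evidence, then compute the claim's perplexity. The sketch below uses GPT-2 from Hugging Face as an illustrative model; the paper's choice of model and decision threshold may differ.

```python
# Sketch of perplexity-based debunking: prime a language model with
# evidence, then score the claim by its conditional perplexity.
# GPT-2 and the thresholding strategy are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def claim_perplexity(evidence: str, claim: str) -> float:
    """Perplexity of the claim tokens conditioned on the evidence."""
    evidence_ids = tokenizer(evidence, return_tensors="pt").input_ids
    claim_ids = tokenizer(" " + claim, return_tensors="pt").input_ids
    input_ids = torch.cat([evidence_ids, claim_ids], dim=1)
    # Mask evidence positions so the loss covers only claim tokens.
    labels = input_ids.clone()
    labels[:, : evidence_ids.size(1)] = -100
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss
    return torch.exp(loss).item()

# A claim whose perplexity exceeds a tuned threshold would be flagged
# as likely false; the threshold is calibrated on held-out data.
```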
arXiv Detail & Related papers (2020-06-08T15:13:44Z) - Distinguish Confusing Law Articles for Legal Judgment Prediction [30.083642130015317]
Legal Judgment Prediction (LJP) is the task of automatically predicting a law case's judgment results given a text describing its facts.
We present an end-to-end model, LADAN, to solve the task of LJP.
arXiv Detail & Related papers (2020-04-06T11:09:44Z) - Mining Disinformation and Fake News: Concepts, Methods, and Recent Advancements [55.33496599723126]
Disinformation, including fake news, has become a global phenomenon due to its explosive growth.
Despite the recent progress in detecting disinformation and fake news, it is still non-trivial due to its complexity, diversity, multi-modality, and costs of fact-checking or annotation.
arXiv Detail & Related papers (2020-01-02T21:01:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.