Closing the Loop: Testing ChatGPT to Generate Model Explanations to
Improve Human Labelling of Sponsored Content on Social Media
- URL: http://arxiv.org/abs/2306.05115v1
- Date: Thu, 8 Jun 2023 11:29:58 GMT
- Title: Closing the Loop: Testing ChatGPT to Generate Model Explanations to
Improve Human Labelling of Sponsored Content on Social Media
- Authors: Thales Bertaglia, Stefan Huber, Catalina Goanta, Gerasimos Spanakis,
Adriana Iamnitchi
- Abstract summary: Regulatory bodies worldwide are intensifying their efforts to ensure transparency in influencer marketing on social media.
The task of automatically detecting sponsored content aims to enable the monitoring and enforcement of such regulations at scale.
We propose using ChatGPT to augment the annotation process with phrases identified as relevant features and brief explanations.
- Score: 4.322339935902437
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Regulatory bodies worldwide are intensifying their efforts to ensure
transparency in influencer marketing on social media through instruments like
the Unfair Commercial Practices Directive (UCPD) in the European Union, or
Section 5 of the Federal Trade Commission Act. Yet enforcing these obligations
has proven to be highly problematic due to the sheer scale of the influencer
market. The task of automatically detecting sponsored content aims to enable
the monitoring and enforcement of such regulations at scale. Current research
in this field primarily frames this problem as a machine learning task,
focusing on developing models that achieve high classification performance in
detecting ads. These machine learning tasks rely on human data annotation to
provide ground truth information. However, agreement between annotators is
often low, leading to inconsistent labels that hinder the reliability of
models. To improve annotation accuracy and, thus, the detection of sponsored
content, we propose using ChatGPT to augment the annotation process with
phrases identified as relevant features and brief explanations. Our experiments
show that this approach consistently improves inter-annotator agreement and
annotation accuracy. Additionally, our survey of user experience in the
annotation task indicates that the explanations improve the annotators'
confidence and streamline the process. Our proposed methods can ultimately lead
to more transparency and alignment with regulatory requirements in sponsored
content detection.
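As a rough illustration of this kind of annotation aid, the sketch below queries an OpenAI chat model for sponsorship-indicating phrases and a one-sentence explanation for a single post caption, so the output could be shown to annotators alongside the post. The model name, prompt wording, and helper function are illustrative assumptions, not the authors' exact prompts or pipeline.
```python
# Minimal sketch (not the paper's exact setup): ask a chat model for
# sponsorship cues and a brief explanation to support human annotators.
# Assumes the `openai` Python package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

PROMPT_TEMPLATE = (
    "You are assisting human annotators who label Instagram posts as "
    "sponsored (disclosed or undisclosed ads) or non-sponsored.\n"
    "For the caption below:\n"
    "1. List the phrases that indicate sponsorship (hashtags, brand mentions, "
    "discount codes, calls to action).\n"
    "2. Give a one-sentence explanation of why the post is likely sponsored or not.\n\n"
    "Caption: {caption}"
)

def explain_post(caption: str, model: str = "gpt-3.5-turbo") -> str:
    """Return model-generated cue phrases and a brief explanation for one caption."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(caption=caption)}],
        temperature=0,  # deterministic output keeps the aid stable across annotators
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(explain_post("Loving my new sneakers from @brandX, use code SAVE10! #ad"))
```
Keeping the temperature at 0 makes the generated explanations reproducible, which helps when comparing inter-annotator agreement with and without the aid.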
Related papers
- Evaluating Fairness in Transaction Fraud Models: Fairness Metrics, Bias Audits, and Challenges [3.499319293058353]
Despite extensive research on algorithmic fairness, there is a notable gap in the study of bias in fraud detection models.
Key challenges include the need for fairness metrics that account for the imbalanced nature of fraud data and the trade-off between fraud protection and service quality.
We present a comprehensive fairness evaluation of transaction fraud models using public synthetic datasets.
arXiv Detail & Related papers (2024-09-06T16:08:27Z) - LabelAId: Just-in-time AI Interventions for Improving Human Labeling Quality and Domain Knowledge in Crowdsourcing Systems [16.546017147593044]
This paper explores just-in-time AI interventions to enhance both labeling quality and domain-specific knowledge among crowdworkers.
We introduce LabelAId, an advanced inference model combining Programmatic Weak Supervision (PWS) with FT-Transformers to infer label correctness.
We then integrate LabelAId into Project Sidewalk, an open-source crowdsourcing platform for urban accessibility.
arXiv Detail & Related papers (2024-03-14T18:59:10Z) - Improving Task Instructions for Data Annotators: How Clear Rules and Higher Pay Increase Performance in Data Annotation in the AI Economy [0.0]
The global surge in AI applications is transforming industries, leading to displacement and complementation of existing jobs, while also giving rise to new employment opportunities.
Data annotation, encompassing the labelling of images or texts by human workers, crucially shapes dataset quality, which in turn directly influences the quality of AI models trained on it.
This paper delves into the economics of data annotation, with a specific focus on the impact of task instruction design and monetary incentives on data quality and costs.
arXiv Detail & Related papers (2023-12-22T09:50:57Z) - Automated Claim Matching with Large Language Models: Empowering
Fact-Checkers in the Fight Against Misinformation [11.323961700172175]
FACT-GPT is a framework designed to automate the claim matching phase of fact-checking using Large Language Models.
This framework identifies new social media content that either supports or contradicts claims previously debunked by fact-checkers.
We evaluated FACT-GPT on an extensive dataset of social media content related to public health.
arXiv Detail & Related papers (2023-10-13T16:21:07Z) - Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z) - ADEPT: A DEbiasing PrompT Framework [49.582497203415855]
Finetuning is an applicable approach for debiasing contextualized word embeddings.
Discrete prompts with semantic meanings have also been shown to be effective in debiasing tasks.
We propose ADEPT, a method to debias PLMs using prompt tuning while maintaining the delicate balance between removing biases and ensuring representation ability.
arXiv Detail & Related papers (2022-11-10T08:41:40Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A
Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - FLAVA: Find, Localize, Adjust and Verify to Annotate LiDAR-Based Point
Clouds [93.3595555830426]
We propose FLAVA, a systematic approach to minimizing human interaction in the annotation process.
Specifically, we divide the annotation pipeline into four parts: find, localize, adjust and verify.
Our system also greatly reduces the amount of interaction by introducing a lightweight yet effective mechanism to propagate the results.
arXiv Detail & Related papers (2020-11-20T02:22:36Z) - Uncertainty as a Form of Transparency: Measuring, Communicating, and
Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z) - Improving Classification through Weak Supervision in Context-specific
Conversational Agent Development for Teacher Education [1.215785021723604]
Developing a conversational agent for a specific educational scenario is time-consuming.
Previous approaches to modeling annotations have relied on labeling thousands of examples and calculating inter-annotator agreement and majority votes.
We propose using a multi-task weak supervision method combined with active learning to address these concerns.
arXiv Detail & Related papers (2020-10-23T23:39:40Z) - Foreseeing the Benefits of Incidental Supervision [83.08441990812636]
This paper studies whether we can, in a single framework, quantify the benefits of various types of incidental signals for a given target task without going through experiments.
We propose a unified PAC-Bayesian motivated informativeness measure, PABI, that characterizes the uncertainty reduction provided by incidental supervision signals.
arXiv Detail & Related papers (2020-06-09T20:59:42Z)