Selective Response Strategies for GenAI
- URL: http://arxiv.org/abs/2502.00729v1
- Date: Sun, 02 Feb 2025 09:27:02 GMT
- Title: Selective Response Strategies for GenAI
- Authors: Boaz Taitler, Omer Ben-Porat
- Abstract summary: The rise of Generative AI (GenAI) has significantly impacted human-based forums like Stack Overflow. This creates a negative feedback loop, hindering the development of GenAI systems. We show that selective response can potentially have a compounding effect on the data generation process.
- Score: 6.261444979025644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rise of Generative AI (GenAI) has significantly impacted human-based forums like Stack Overflow, which are essential for generating high-quality data. As users turn to GenAI instead of these forums, less fresh human-generated data is produced, creating a negative feedback loop that hinders the development of GenAI systems, which rely on such data to provide accurate responses. In this paper, we provide a possible remedy: a novel strategy we call selective response. Selective response means that GenAI could strategically provide inaccurate (or conservative) responses to queries involving emerging topics and novel technologies, thereby driving users back to human-based forums like Stack Overflow. We show that selective response can have a compounding effect on the data generation process, increasing both GenAI's revenue and user welfare in the long term. From an algorithmic perspective, we propose an approximately optimal approach to maximizing GenAI's revenue under social welfare constraints. From a regulatory perspective, we derive necessary and sufficient conditions under which selective response improves welfare.
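The abstract's selective-response idea can be made concrete with a small toy simulation. The sketch below is purely illustrative: the parameter values, the accuracy-growth rule, and the revenue terms are assumptions made for exposition, not the paper's model. It contrasts a greedy policy (answer every query) with a selective one (decline emerging-topic queries until accuracy is high enough), where the routed-away queries grow the forum's data pool and that pool in turn raises GenAI's future accuracy.

```python
# Toy sketch of the selective-response feedback loop.
# All dynamics and constants below are illustrative assumptions, not the paper's model.

def simulate(selective: bool, periods: int = 20) -> float:
    """Cumulative GenAI 'revenue' under a toy data-generation dynamic."""
    accuracy = 0.3      # assumed starting accuracy on emerging topics
    forum_data = 1.0    # stock of human-generated forum data (arbitrary units)
    revenue = 0.0
    for _ in range(periods):
        answer_emerging = (not selective) or accuracy >= 0.8
        if answer_emerging:
            revenue += accuracy   # engagement on emerging topics scales with accuracy
            forum_data += 0.1     # forum barely grows when GenAI absorbs the queries
        else:
            forum_data += 1.0     # selective response routes users to the forum
        revenue += 0.5            # baseline revenue from mature topics
        # Fresh forum data improves next period's accuracy on emerging topics.
        accuracy = min(0.95, 0.3 + 0.05 * forum_data)
    return revenue

print("greedy   :", round(simulate(selective=False), 2))
print("selective:", round(simulate(selective=True), 2))
```

In this toy dynamic the selective policy forgoes revenue early, but the compounding growth of forum data lifts accuracy enough that it overtakes the greedy policy over a long horizon, mirroring the compounding effect the abstract describes.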
Related papers
- Generate, Discriminate, Evolve: Enhancing Context Faithfulness via Fine-Grained Sentence-Level Self-Evolution [61.80716438091887]
GenDiE (Generate, Discriminate, Evolve) is a novel self-evolving framework that enhances context faithfulness through fine-grained sentence-level optimization.
By treating each sentence in a response as an independent optimization unit, GenDiE effectively addresses the limitations of previous approaches.
Experiments on ASQA (in-domain LFQA) and ConFiQA datasets demonstrate that GenDiE surpasses various baselines in both faithfulness and correctness.
arXiv Detail & Related papers (2025-03-03T16:08:33Z) - Human Misperception of Generative-AI Alignment: A Laboratory Experiment [0.393259574660092]
We study people's perception of generative artificial intelligence (GenAI) alignment in the context of economic decision-making.
We find that people overestimate the degree of alignment between GenAI's choices and human choices.
arXiv Detail & Related papers (2025-02-20T16:32:42Z) - Generative AI Enabled Matching for 6G Multiple Access [51.00960374545361]
We propose a GenAI-enabled matching generation framework to support 6G multiple access.
We show that our framework can generate more effective matching strategies based on given conditions and predefined rewards.
arXiv Detail & Related papers (2024-10-29T13:01:26Z) - "I Am the One and Only, Your Cyber BFF": Understanding the Impact of GenAI Requires Understanding the Impact of Anthropomorphic AI [55.99010491370177]
We argue that we cannot thoroughly map the social impacts of generative AI without mapping the social impacts of anthropomorphic AI.
Anthropomorphic AI systems are increasingly prone to generating outputs that are perceived to be human-like.
arXiv Detail & Related papers (2024-10-11T04:57:41Z) - The Influencer Next Door: How Misinformation Creators Use GenAI [1.1650821883155187]
We find that non-experts increasingly use GenAI to remix, repackage, and (re)produce content to meet their personal needs and desires.
We analyze how these understudied emergent uses of GenAI produce new or accelerated misinformation harms.
arXiv Detail & Related papers (2024-05-22T11:40:22Z) - Explainable Generative AI (GenXAI): A Survey, Conceptualization, and Research Agenda [1.8592384822257952]
We elaborate on why XAI has gained importance with the rise of GenAI and its challenges for explainability research.
We also unveil novel and emerging desiderata that explanations should fulfill, covering aspects such as verifiability, interactivity, security, and cost.
arXiv Detail & Related papers (2024-04-15T08:18:16Z) - Prompt Smells: An Omen for Undesirable Generative AI Outputs [4.105236597768038]
We propose two new concepts that will aid the research community in addressing limitations associated with the application of GenAI models.
First, we propose a definition for the "desirability" of GenAI outputs and three factors which are observed to influence it.
Second, drawing inspiration from Martin Fowler's code smells, we propose the concept of "prompt smells" and the adverse effects they are observed to have on the desirability of GenAI outputs.
arXiv Detail & Related papers (2024-01-23T10:10:01Z) - Data Equity: Foundational Concepts for Generative AI [0.0]
GenAI promises immense potential to drive digital and social innovation.
GenAI has the potential to democratize access and usage of technologies.
However, left unchecked, it could deepen inequities.
arXiv Detail & Related papers (2023-10-27T05:19:31Z) - Improving Generalization of Alignment with Human Preferences through Group Invariant Learning [56.19242260613749]
Reinforcement Learning from Human Feedback (RLHF) enables the generation of responses more aligned with human preferences.
Previous work shows that Reinforcement Learning (RL) often exploits shortcuts to attain high rewards and overlooks challenging samples.
We propose a novel approach that can learn a consistent policy via RL across various data groups or domains.
arXiv Detail & Related papers (2023-10-18T13:54:15Z) - Learning towards Selective Data Augmentation for Dialogue Generation [52.540330534137794]
We argue that not all cases are beneficial for the augmentation task, and that cases suitable for augmentation should satisfy two specific attributes.
We propose a Selective Data Augmentation framework (SDA) for the response generation task.
arXiv Detail & Related papers (2023-03-17T01:26:39Z) - Knowledge Transfer from Answer Ranking to Answer Generation [97.38378660163414]
We propose to train a GenQA model by transferring knowledge from a trained AS2 model.
We also propose to use the AS2 model's prediction scores for loss weighting and for score-conditioned input/output shaping (a minimal sketch of the loss-weighting idea follows this entry).
arXiv Detail & Related papers (2022-10-23T21:51:27Z)
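The loss-weighting idea in the last entry above can be illustrated with a short sketch. The tensor shapes, names, and the exact weighting scheme here are assumptions for illustration, not the authors' implementation: a per-example cross-entropy loss is simply scaled by the AS2 model's answer-relevance score.

```python
# Hypothetical sketch of score-weighted training loss for a GenQA model.
import torch
import torch.nn.functional as F

def score_weighted_loss(logits: torch.Tensor,     # (batch, seq_len, vocab)
                        targets: torch.Tensor,    # (batch, seq_len)
                        as2_scores: torch.Tensor  # (batch,) answer scores in [0, 1]
                        ) -> torch.Tensor:
    """Per-example cross-entropy weighted by an answer-ranking (AS2) score."""
    per_token = F.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none")   # (batch, seq_len)
    per_example = per_token.mean(dim=1)                       # (batch,)
    return (as2_scores * per_example).mean()

# Example with dummy tensors:
logits = torch.randn(4, 16, 100)
targets = torch.randint(0, 100, (4, 16))
scores = torch.tensor([0.9, 0.2, 0.7, 0.5])
print(score_weighted_loss(logits, targets, scores))
```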
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.