How to Disclose? Strategic AI Disclosure in Crowdfunding
- URL: http://arxiv.org/abs/2602.15698v1
- Date: Tue, 17 Feb 2026 16:26:03 GMT
- Title: How to Disclose? Strategic AI Disclosure in Crowdfunding
- Authors: Ning Wang, Chen Liang
- Abstract summary: We find that mandatory AI disclosure significantly reduces crowdfunding performance. Funds raised decline by 39.8% and backer counts by 23.9% for AI-involved projects. This adverse effect is systematically moderated by disclosure strategy.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As artificial intelligence (AI) increasingly integrates into crowdfunding practices, strategic disclosure of AI involvement has become critical. Yet, empirical insights into how different disclosure strategies influence investor decisions remain limited. Drawing on signaling theory and Aristotle's rhetorical framework, we examine how mandatory AI disclosure affects crowdfunding performance and how substantive signals (degree of AI involvement) and rhetorical signals (logos/explicitness, ethos/authenticity, pathos/emotional tone) moderate these effects. Leveraging Kickstarter's mandatory AI disclosure policy as a natural experiment and four supplementary online experiments, we find that mandatory AI disclosure significantly reduces crowdfunding performance: funds raised decline by 39.8% and backer counts by 23.9% for AI-involved projects. However, this adverse effect is systematically moderated by disclosure strategy. Greater AI involvement amplifies the negative effects of AI disclosure, while high authenticity and high explicitness mitigate them. Interestingly, excessive positive emotional tone (a strategy creators might intuitively adopt to counteract AI skepticism) backfires and exacerbates negative outcomes. Supplementary randomized experiments identify two underlying mechanisms: perceived creator competence and AI washing concerns. Substantive signals primarily affect competence judgments, whereas rhetorical signals operate through varied pathways: either mediator alone or both in sequence. These findings provide theoretical and practical insights for entrepreneurs, platforms, and policymakers strategically managing AI transparency in high-stakes investment contexts.
Related papers
- When Is Self-Disclosure Optimal? Incentives and Governance of AI-Generated Content [25.691139058468377]
Gen-AI is reshaping content creation on digital platforms by reducing production costs and enabling scalable output of varying quality. Platforms have begun adopting disclosure policies that require creators to label AI-generated content. This paper develops a formal model to study the economic implications of such disclosure regimes.
arXiv Detail & Related papers (2026-01-26T16:31:04Z) - AI Credibility Signals Outrank Institutions and Engagement in Shaping News Perception on Social Media [4.197003225775791]
We present a large-scale mixed-design experiment investigating how AI-generated credibility scores affect user perception of political news. Our results reveal that AI feedback significantly moderates partisan bias and institutional distrust, surpassing traditional engagement signals such as likes and shares.
arXiv Detail & Related papers (2025-11-04T08:46:54Z) - "We are not Future-ready": Understanding AI Privacy Risks and Existing Mitigation Strategies from the Perspective of AI Developers in Europe [56.1653658714305]
We interviewed 25 AI developers based in Europe to understand which privacy threats they believe pose the greatest risk to users, developers, and businesses. We find that there is little consensus among AI developers on the relative ranking of privacy risks. While AI developers are aware of proposed mitigation strategies for addressing these risks, they reported minimal real-world adoption.
arXiv Detail & Related papers (2025-10-01T13:51:33Z) - Advancing Science- and Evidence-based AI Policy [163.43609502905707]
This paper tackles the problem of how to optimize the relationship between evidence and policy to address the opportunities and challenges of AI. An increasing number of efforts address this problem, often by either (i) contributing research into the risks of AI and their effective mitigation or (ii) advocating for policy to address these risks.
arXiv Detail & Related papers (2025-08-02T23:20:58Z) - Human Decision-making is Susceptible to AI-driven Manipulation [87.24007555151452]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes. This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z) - Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We examine what is known about human wisdom and sketch a vision of its AI counterpart. We argue that AI systems particularly struggle with metacognition. We discuss how wise AI might be benchmarked, trained, and implemented.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - How Performance Pressure Influences AI-Assisted Decision Making [52.997197698288936]
We show how pressure and explainable AI (XAI) techniques interact with AI advice-taking behavior. Our results show complex interaction effects, with different combinations of pressure and XAI techniques either improving or worsening AI advice-taking behavior.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only scientifically fragile, but also comes with undesirable consequences. First, it is not sustainable: despite efficiency improvements, its compute demands increase faster than model performance. Second, it implies focusing on certain problems at the expense of others, leaving aside important applications such as health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z) - The Impact of Generative Artificial Intelligence on Market Equilibrium: Evidence from a Natural Experiment [19.963531237647103]
Generative artificial intelligence (AI) exhibits the capability to generate creative content akin to human output with greater efficiency and reduced costs.
This paper empirically investigates the impact of generative AI on market equilibrium, in the context of China's leading art outsourcing platform.
Our analysis shows that the advent of generative AI led to a 64% reduction in average prices, yet it simultaneously spurred a 121% increase in order volume and a 56% increase in overall revenue.
arXiv Detail & Related papers (2023-11-13T04:31:53Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - FATE in AI: Towards Algorithmic Inclusivity and Accessibility [0.0]
To prevent algorithmic disparities, fairness, accountability, transparency, and ethics (FATE) in AI are being implemented.
This study examines FATE-related desiderata, particularly transparency and ethics, in areas of the global South that are underserved by AI.
To promote inclusivity, a community-led strategy is proposed to collect and curate representative data for responsible AI design.
arXiv Detail & Related papers (2023-01-03T15:08:10Z) - Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI's Diffusion versus OpenAI's Dall-E [0.0]
This presentation responds to the innovation-versus-safety debate by reconfiguring ethics as an innovation accelerator. The work of ethics is embedded in AI development and application, rather than functioning from outside.
arXiv Detail & Related papers (2022-12-04T14:54:13Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - The Managerial Effects of Algorithmic Fairness Activism [1.942960499551692]
We randomly expose business decision-makers to arguments used in AI fairness activism.
Arguments emphasizing the inescapability of algorithmic bias lead managers to abandon AI for manual review by humans.
Emphasis on status quo comparisons yields opposite effects.
arXiv Detail & Related papers (2020-12-04T04:11:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.