When and Why is Persuasion Hard? A Computational Complexity Result
- URL: http://arxiv.org/abs/2408.07923v1
- Date: Thu, 15 Aug 2024 04:22:46 GMT
- Title: When and Why is Persuasion Hard? A Computational Complexity Result
- Authors: Zachary Wojtowicz
- Abstract summary: This paper places human and AI agents on a common conceptual footing by formalizing informational persuasion as a mathematical decision problem.
A novel proof establishes that persuasive messages are challenging to discover (NP-Hard) but easy to adopt if supplied by others.
This asymmetry helps explain why people are susceptible to persuasion, even in contexts where all relevant information is publicly available.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As generative foundation models improve, they also tend to become more persuasive, raising concerns that AI automation will enable governments, firms, and other actors to manipulate beliefs with unprecedented scale and effectiveness at virtually no cost. The full economic and social ramifications of this trend have been difficult to foresee, however, given that we currently lack a complete theoretical understanding of why persuasion is costly for human labor to produce in the first place. This paper places human and AI agents on a common conceptual footing by formalizing informational persuasion as a mathematical decision problem and characterizing its computational complexity. A novel proof establishes that persuasive messages are challenging to discover (NP-Hard) but easy to adopt if supplied by others (NP). This asymmetry helps explain why people are susceptible to persuasion, even in contexts where all relevant information is publicly available. The result also illuminates why litigation, strategic communication, and other persuasion-oriented activities have historically been so human capital intensive, and it provides a new theoretical basis for studying how AI will impact various industries.
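The abstract's central asymmetry (messages are NP-Hard to discover but efficiently verifiable once supplied) mirrors the classic search-versus-verification gap for NP problems. As a minimal sketch, not the paper's own construction, the same asymmetry can be illustrated with Boolean satisfiability, the canonical NP-complete problem: finding a satisfying assignment may require searching all 2^n candidates, while checking a proposed assignment takes one linear pass.

```python
from itertools import product

def check(formula, assignment):
    """Verify a candidate in time linear in the formula size (the easy, 'NP' side).

    formula: list of clauses; each clause is a list of signed ints,
    e.g. 3 means variable 3 is true, -3 means variable 3 is false.
    """
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in formula
    )

def discover(formula, n_vars):
    """Find a satisfying assignment by brute force over 2^n candidates
    (the hard, 'NP-Hard' side: no known polynomial-time shortcut)."""
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: b for i, b in enumerate(bits)}
        if check(formula, assignment):
            return assignment
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
formula = [[1, 2], [-1, 3], [-2, -3]]
found = discover(formula, 3)   # expensive: searches the assignment space
assert found is not None and check(formula, found)  # cheap: a single pass
```

In the paper's framing, the persuader pays the exponential search cost while the audience only pays the cheap verification cost, which is why supplied arguments are easy to adopt even when constructing them is intractable.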
Related papers
- Raising the Stakes: Performance Pressure Improves AI-Assisted Decision Making [57.53469908423318]
We show the effects of performance pressure on AI advice reliance when laypeople complete a common AI-assisted task.
We find that when the stakes are high, people use AI advice more appropriately than when stakes are lower, regardless of the presence of an AI explanation.
arXiv Detail & Related papers (2024-10-21T22:39:52Z) - Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences.
First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z) - Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups [0.0]
This paper explores the relationship between capitalism, racial injustice, and artificial intelligence (AI).
It argues that AI acts as a contemporary vehicle for age-old forms of exploitation.
The paper promotes an approach that integrates social justice and equity into the core of technological design and policy.
arXiv Detail & Related papers (2024-03-10T22:40:07Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Explanations Can Reduce Overreliance on AI Systems During Decision-Making [12.652229245306671]
We show that people strategically choose whether or not to engage with an AI explanation, demonstrating that there are scenarios where AI explanations reduce overreliance.
We manipulate the costs and benefits in a maze task, where participants collaborate with a simulated AI to find the exit of a maze.
Our results suggest that some of the null effects found in literature could be due in part to the explanation not sufficiently reducing the costs of verifying the AI's prediction.
arXiv Detail & Related papers (2022-12-13T18:59:31Z) - Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z) - Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z) - Levels of explainable artificial intelligence for human-aligned conversational explanations [0.6571063542099524]
People are affected by autonomous decisions every day and need to understand the decision-making process to accept the outcomes.
This paper aims to define levels of explanation and describe how they can be integrated to create a human-aligned conversational explanation system.
arXiv Detail & Related papers (2021-07-07T12:19:16Z) - Towards a framework for understanding societal and ethical implications of Artificial Intelligence [2.28438857884398]
The objective of this paper is to identify the main societal and ethical challenges implied by a massive uptake of AI.
We have surveyed the literature for the most common challenges and classified them in seven groups: 1) Non-desired effects, 2) Liability, 3) Unknown consequences, 4) Relation people-robots, 5) Concentration of power and wealth, 6) Intentional bad uses, and 7) AI for weapons and warfare.
arXiv Detail & Related papers (2020-01-03T17:55:15Z) - Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.