Why They Disagree: Decoding Differences in Opinions about AI Risk on the Lex Fridman Podcast
- URL: http://arxiv.org/abs/2512.06350v1
- Date: Sat, 06 Dec 2025 08:48:30 GMT
- Title: Why They Disagree: Decoding Differences in Opinions about AI Risk on the Lex Fridman Podcast
- Authors: Nghi Truong, Phanish Puranam, Özgecan Koçak
- Abstract summary: This paper analyzes contemporary debates about AI risk. We find that differences in perspectives about existential risk ("X-risk") arise from differences in causal premises about design vs. emergence in complex systems. Disagreements about these two forms of AI risk appear to share two properties: neither involves significant disagreements on moral values, and both can be described in terms of differing views on the extent of boundedness of human rationality.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The emergence of transformative technologies often surfaces deep societal divisions, nowhere more evident than in contemporary debates about artificial intelligence (AI). A striking feature of these divisions is that they persist despite shared interests in ensuring that AI benefits humanity and avoiding catastrophic outcomes. This paper analyzes contemporary debates about AI risk, parsing the differences between the "doomer" and "boomer" perspectives into definitional, factual, causal, and moral premises to identify key points of contention. We find that differences in perspectives about existential risk ("X-risk") arise fundamentally from differences in causal premises about design vs. emergence in complex systems, while differences in perspectives about employment risks ("E-risks") pertain to different causal premises about the applicability of past theories (evolution) vs. their inapplicability (revolution). Disagreements about these two forms of AI risk appear to share two properties: neither involves significant disagreements on moral values, and both can be described in terms of differing views on the extent of boundedness of human rationality. Our approach to analyzing reasoning chains at scale, using an ensemble of LLMs to parse textual data, can be applied to identify key points of contention in debates about risk to the public in any arena.
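The abstract's methodological claim, parsing reasoning chains at scale with an ensemble of LLMs, lends itself to a short illustration. The Python sketch below is a minimal, hypothetical rendering of that idea, not the authors' actual pipeline: each statement from a transcript is labeled as a definitional, factual, causal, or moral premise by several models, and a majority vote resolves disagreements. The model names, prompt wording, and fallback rule are all illustrative assumptions.

```python
# Hypothetical sketch: premise-type labeling with an ensemble of LLMs.
# Assumes an OpenAI-compatible API and OPENAI_API_KEY in the environment;
# models, prompt, and voting rule are illustrative, not the paper's setup.
from collections import Counter

from openai import OpenAI  # pip install openai

client = OpenAI()

PREMISE_TYPES = ("definitional", "factual", "causal", "moral")
ENSEMBLE = ["gpt-4o", "gpt-4o-mini", "gpt-4-turbo"]  # assumed model choices

PROMPT = (
    "Classify the premise type of the statement below as exactly one of: "
    "definitional, factual, causal, moral. Reply with the single word only.\n\n"
    "Statement: {statement}"
)


def classify_with_model(model: str, statement: str) -> str:
    """Ask one model for a premise label; default to 'factual' on junk output."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[{"role": "user", "content": PROMPT.format(statement=statement)}],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in PREMISE_TYPES else "factual"


def ensemble_label(statement: str) -> tuple[str, float]:
    """Majority-vote label across the ensemble, with the agreement fraction."""
    votes = [classify_with_model(m, statement) for m in ENSEMBLE]
    label, count = Counter(votes).most_common(1)[0]
    return label, count / len(votes)


if __name__ == "__main__":
    stmt = "If we cannot verify an AI's goals, deploying it risks catastrophe."
    label, agreement = ensemble_label(stmt)
    print(f"{label} (ensemble agreement: {agreement:.0%})")
```

Running each statement through several models and voting is one way to hedge against any single model's idiosyncratic reading; the agreement fraction can also flag statements whose premise type is genuinely ambiguous.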
Related papers
- Realist and Pluralist Conceptions of Intelligence and Their Implications on AI Research [33.46616462561506]
We argue that current AI research operates on a spectrum between two different underlying conceptions of intelligence. These conceptions are Intelligence Realism and Intelligence Pluralism. We argue that making explicit these underlying assumptions can contribute to a clearer understanding of disagreements in AI research.
arXiv Detail & Related papers (2025-11-19T09:48:07Z) - The Stories We Govern By: AI, Risk, and the Power of Imaginaries [0.0]
This paper examines how competing sociotechnical imaginaries of artificial intelligence (AI) risk shape governance decisions and regulatory constraints. We analyse three dominant narrative groups: existential risk proponents, who emphasise catastrophic AGI scenarios; accelerationists, who portray AI as a transformative force to be unleashed; and critical AI scholars, who foreground present-day harms rooted in systemic inequality. Our findings reveal how these narratives embed distinct assumptions about risk and have the potential to progress into policy-making processes by narrowing the space for alternative governance approaches.
arXiv Detail & Related papers (2025-08-15T09:57:56Z) - Perception Gaps in Risk, Benefit, and Value Between Experts and Public Challenge Socially Accepted AI [0.0]
This study examines how the general public and academic AI experts perceive AI's capabilities and impact. The scenarios span domains such as sustainability, healthcare, job performance, societal inequality, art, and warfare. Experts consistently anticipate higher probabilities, perceive lower risks, report greater benefits, and express more positive sentiment toward AI than non-experts.
arXiv Detail & Related papers (2024-12-02T12:51:45Z) - AI Risk Management Should Incorporate Both Safety and Security [185.68738503122114]
We argue that stakeholders in AI risk management should be aware of the nuances, synergies, and interplay between safety and security.
We introduce a unified reference framework to clarify the differences and interplay between AI safety and AI security.
arXiv Detail & Related papers (2024-05-29T21:00:47Z) - On the Societal Impact of Open Foundation Models [93.67389739906561]
We focus on open foundation models, defined here as those with broadly available model weights.
We identify five distinctive properties of open foundation models that lead to both their benefits and risks.
arXiv Detail & Related papers (2024-02-27T16:49:53Z) - Two Types of AI Existential Risk: Decisive and Accumulative [3.5051464966389116]
This paper contrasts the conventional "decisive AI x-risk hypothesis" with an "accumulative AI x-risk hypothesis". It argues that the accumulative view can reconcile seemingly incompatible perspectives on AI risks.
arXiv Detail & Related papers (2024-01-15T17:06:02Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose roles that AI regulation should take to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Arguments about Highly Reliable Agent Designs as a Useful Path to
Artificial Intelligence Safety [0.0]
Highly Reliable Agent Designs (HRAD) is one of the most controversial and ambitious approaches to AI safety.
We have titled the arguments (1) incidental utility, (2) deconfusion, (3) precise specification, and (4) prediction.
We have explained the assumptions and claims based on a review of published and informal literature, along with the positions of experts who have commented on the topic.
arXiv Detail & Related papers (2022-01-09T07:42:37Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)