AI Safety Should Prioritize the Future of Work
- URL: http://arxiv.org/abs/2504.13959v1
- Date: Wed, 16 Apr 2025 23:12:30 GMT
- Title: AI Safety Should Prioritize the Future of Work
- Authors: Sanchaita Hazra, Bodhisattwa Prasad Majumder, Tuhin Chakrabarty
- Abstract summary: Current efforts in AI safety prioritize filtering harmful content, preventing manipulation of human behavior, and eliminating existential risks in cybersecurity or biosecurity. While pressing, this narrow focus overlooks critical human-centric considerations that shape the long-term trajectory of a society.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current efforts in AI safety prioritize filtering harmful content, preventing manipulation of human behavior, and eliminating existential risks in cybersecurity or biosecurity. While pressing, this narrow focus overlooks critical human-centric considerations that shape the long-term trajectory of a society. In this position paper, we identify the risks of overlooking the impact of AI on the future of work and recommend comprehensive transition support towards the evolution of meaningful labor with human agency. Through the lens of economic theories, we highlight the intertemporal impacts of AI on human livelihood and the structural changes in labor markets that exacerbate income inequality. Additionally, the closed-source approach of major stakeholders in AI development resembles rent-seeking behavior through exploiting resources, breeding mediocrity in creative labor, and monopolizing innovation. To address this, we argue in favor of a robust international copyright framework supported by collective licensing that ensures fair compensation mechanisms for using data to train AI models. We strongly recommend a pro-worker framework of global AI governance to enhance shared prosperity and economic justice while reducing technical debt.
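The collective-licensing proposal implies a concrete payout mechanism. As a minimal sketch (an assumption on our part, not a mechanism specified in the paper), suppose a licensing body collects a fixed pool from AI developers and distributes it to rights holders pro rata by their share of the licensed training corpus; the holder names, token counts, and pool size below are all hypothetical.

```python
# Minimal sketch of a pro-rata collective-licensing payout. The pro-rata
# rule, holder names, and figures are illustrative assumptions; the paper
# does not specify this mechanism.

def prorata_payouts(pool: float, tokens_used: dict[str, int]) -> dict[str, float]:
    """Split `pool` across rights holders by their share of licensed tokens."""
    total = sum(tokens_used.values())
    if total == 0:
        return {holder: 0.0 for holder in tokens_used}
    return {holder: pool * n / total for holder, n in tokens_used.items()}

if __name__ == "__main__":
    # Hypothetical: a $1M pool and three rights holders with differing usage.
    usage = {"news_archive": 6_000_000, "fiction_guild": 3_000_000, "photo_coop": 1_000_000}
    for holder, payout in prorata_payouts(1_000_000.0, usage).items():
        print(f"{holder}: ${payout:,.2f}")
```

Real collective-licensing schemes would add usage measurement and governance on top of this, but a pro-rata split of this kind is the arithmetic core of most fair-compensation proposals.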
Related papers
- The Economics of p(doom): Scenarios of Existential Risk and Economic Growth in the Age of Transformative AI
Its key focus is the potential emergence of transformative AI (TAI). Discussed scenarios range from human extinction after a misaligned TAI takes over ("AI doom") to unprecedented economic growth and abundance ("post-scarcity").
arXiv Detail & Related papers (2025-03-10T13:53:39Z)
- Fully Autonomous AI Agents Should Not be Developed
This paper argues that fully autonomous AI agents should not be developed. In support of this position, we build from prior scientific literature and current product marketing to delineate different AI agent levels. Our analysis reveals that risks to people increase with the autonomy of a system.
arXiv Detail & Related papers (2025-02-04T19:00:06Z)
- Transparency, Security, and Workplace Training & Awareness in the Age of Generative AI
As AI technologies advance, ethical considerations, transparency, data privacy, and their impact on human labor intersect with the drive for innovation and efficiency.
Our research explores publicly accessible large language models (LLMs) that often operate on the periphery, away from mainstream scrutiny.
Specifically, we examine Gab AI, a platform that centers around unrestricted communication and privacy, allowing users to interact freely without censorship.
arXiv Detail & Related papers (2024-12-19T17:40:58Z)
- Follow the money: a startup-based measure of AI exposure across occupations, industries and regions
Existing measures of AI occupational exposure focus on AI's theoretical potential to substitute or complement human labour on the basis of technical feasibility.
We introduce the AI Startup Exposure (AISE) index, a novel metric based on occupational descriptions from O*NET and AI applications developed by startups.
Our findings suggest that AI adoption will be gradual and shaped by social factors as much as by the technical feasibility of AI applications; a toy similarity sketch follows this entry.
arXiv Detail & Related papers (2024-12-06T10:25:05Z)
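As referenced above, a generic way to proxy an occupation's AI exposure from text alone is to take the maximum similarity between the occupation's description and startup application descriptions. The sketch below uses TF-IDF cosine similarity as a toy stand-in; it is not the AISE paper's actual pipeline, and every description is fabricated for illustration.

```python
# Toy exposure score: for each occupation description, take the maximum
# TF-IDF cosine similarity against startup application descriptions.
# A generic proxy, not the AISE methodology; all text is made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

occupations = {
    "paralegal": "prepare legal documents, research case law, draft contracts",
    "welder": "join metal parts using welding equipment on fabrication sites",
}
startup_apps = [
    "AI assistant that drafts and reviews legal contracts",
    "computer vision for warehouse inventory tracking",
]

vectorizer = TfidfVectorizer().fit(list(occupations.values()) + startup_apps)
occ_vecs = vectorizer.transform(occupations.values())
app_vecs = vectorizer.transform(startup_apps)

# Similarity matrix: rows are occupations, columns are startup applications.
sims = cosine_similarity(occ_vecs, app_vecs)
for (name, _), row in zip(occupations.items(), sims):
    print(f"{name}: exposure ~= {row.max():.2f}")
```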
- The Dual Imperative: Innovation and Regulation in the AI Era
This article addresses the societal costs associated with the lack of regulation in Artificial Intelligence. Over fifty years of AI research have propelled AI into the mainstream, promising significant economic benefits. The discourse is polarized between accelerationists, advocating for unfettered technological advancement, and doomers, calling for a slowdown to prevent dystopian outcomes.
arXiv Detail & Related papers (2024-05-23T08:26:25Z)
- Exploiting the Margin: How Capitalism Fuels AI at the Expense of Minoritized Groups
This paper explores the relationship between capitalism, racial injustice, and artificial intelligence (AI).
It argues that AI acts as a contemporary vehicle for age-old forms of exploitation.
The paper promotes an approach that integrates social justice and equity into the core of technological design and policy.
arXiv Detail & Related papers (2024-03-10T22:40:07Z)
- A Safe Harbor for AI Evaluation and Red Teaming
Some researchers fear that conducting independent evaluation and red teaming of generative AI systems, or releasing their findings, will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages; a generic bias-metric sketch follows this entry.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
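The banking entry above is organized around understanding, mitigating, and accounting for bias. A common first diagnostic in that space is the demographic parity difference, the gap in positive-outcome rates between groups. The sketch below is a generic illustration in plain Python, explicitly not the API of the released Bias On Demand or FairView packages, and the loan-approval data is invented.

```python
# Demographic parity difference: gap in positive-outcome rates between two
# groups. A generic bias diagnostic, not the Bias On Demand / FairView API;
# the loan-approval data below is made up.

def positive_rate(decisions: list[int]) -> float:
    """Fraction of decisions that are positive (1 = approved)."""
    return sum(decisions) / len(decisions)

def demographic_parity_diff(group_a: list[int], group_b: list[int]) -> float:
    """Difference in approval rates; 0.0 means parity on this metric."""
    return positive_rate(group_a) - positive_rate(group_b)

if __name__ == "__main__":
    # Hypothetical loan approvals for two demographic groups.
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75.0% approved
    group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% approved
    print(f"demographic parity difference: {demographic_parity_diff(group_a, group_b):+.3f}")
```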
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Fairness in AI and Its Long-Term Implications on Society
We take a closer look at AI fairness and analyze how a lack of AI fairness can lead to the deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play in making the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.