Mapping the individual, social, and biospheric impacts of Foundation Models
- URL: http://arxiv.org/abs/2407.17129v1
- Date: Wed, 24 Jul 2024 10:05:40 GMT
- Title: Mapping the individual, social, and biospheric impacts of Foundation Models
- Authors: Andrés Domínguez Hernández, Shyam Krishna, Antonella Maia Perini, Michael Katell, SJ Bennett, Ann Borda, Youmna Hashem, Semeli Hadjiloizou, Sabeehah Mahomed, Smera Jayadeva, Mhairi Aitken, David Leslie
- Abstract summary: This paper offers a critical framework to account for the social, political, and environmental dimensions of foundation models and generative AI.
We identify 14 categories of risks and harms and map them according to their individual, social, and biospheric impacts.
- Score: 0.39843531413098965
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Responding to the rapid roll-out and large-scale commercialization of foundation models, large language models, and generative AI, an emerging body of work is shedding light on the myriad impacts these technologies are having across society. Such research is expansive, ranging from the production of discriminatory, fake and toxic outputs, and privacy and copyright violations, to the unjust extraction of labor and natural resources. The same has not been the case in some of the most prominent AI governance initiatives in the global north like the UK's AI Safety Summit and the G7's Hiroshima process, which have influenced much of the international dialogue around AI governance. Despite the wealth of cautionary tales and evidence of algorithmic harm, there has been an ongoing over-emphasis within the AI governance discourse on technical matters of safety and global catastrophic or existential risks. This narrowed focus has tended to draw attention away from very pressing social and ethical challenges posed by the current brute-force industrialization of AI applications. To address such a visibility gap between real-world consequences and speculative risks, this paper offers a critical framework to account for the social, political, and environmental dimensions of foundation models and generative AI. We identify 14 categories of risks and harms and map them according to their individual, social, and biospheric impacts. We argue that this novel typology offers an integrative perspective to address the most urgent negative impacts of foundation models and their downstream applications. We conclude with recommendations on how this typology could be used to inform technical and normative interventions to advance responsible AI.
Related papers
- Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only scientifically fragile, but also comes with undesirable consequences.
First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z)
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- The Social Impact of Generative AI: An Analysis on ChatGPT [0.7401425472034117]
The rapid development of Generative AI models has sparked heated discussions regarding their benefits, limitations, and associated risks.
Generative models hold immense promise across multiple domains, such as healthcare, finance, and education.
This paper adopts a methodology to delve into the societal implications of Generative AI tools, focusing primarily on the case of ChatGPT.
arXiv Detail & Related papers (2024-03-07T17:14:22Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- The Global Impact of AI-Artificial Intelligence: Recent Advances and Future Directions, A Review [0.0]
The article highlights the implications of AI, including its impact on economic, ethical, social, security & privacy, and job displacement aspects.
It discusses the ethical concerns surrounding AI development, including issues of bias, security, and privacy violations.
The article concludes by emphasizing the importance of public engagement and education to promote awareness and understanding of AI's impact on society at large.
arXiv Detail & Related papers (2023-12-22T00:41:21Z)
- Survey on AI Ethics: A Socio-technical Perspective [0.9374652839580183]
Ethical concerns associated with AI are multifaceted, including challenging issues of fairness, privacy and data protection, responsibility and accountability, safety and robustness, transparency and explainability, and environmental impact.
This work unifies the current and future ethical concerns of deploying AI into society.
arXiv Detail & Related papers (2023-11-28T21:00:56Z)
- A Review of the Ethics of Artificial Intelligence and its Applications in the United States [0.0]
The paper highlights the impact AI has on every sector of the US economy and the resultant effect on entities spanning businesses, government, academia, and civil society.
Our discussion explores eleven fundamental 'ethical principles' structured as overarching themes.
These encompass Transparency, Justice, Fairness, Equity, Non-Maleficence, Responsibility, Accountability, Privacy, Beneficence, Freedom, Autonomy, Trust, Dignity, Sustainability, and Solidarity.
arXiv Detail & Related papers (2023-10-09T14:29:00Z)
- Applications and Societal Implications of Artificial Intelligence in Manufacturing: A Systematic Review [0.3867363075280544]
The study finds that there is a predominantly optimistic outlook in prior literature regarding AI's impact on firms.
The paper draws analogies to historical cases and other examples to provide a contextual perspective on potential societal effects of industrial AI.
arXiv Detail & Related papers (2023-07-25T07:17:37Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Empowering Local Communities Using Artificial Intelligence [70.17085406202368]
Exploring the impact of AI on society from a people-centered perspective has become an important topic.
Previous works in citizen science have identified methods of using AI to engage the public in research.
This article discusses the challenges of applying AI in Community Citizen Science.
arXiv Detail & Related papers (2021-10-05T12:51:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.