Whose Side are Ethics Codes On? Power, Responsibility and the Social
Good
- URL: http://arxiv.org/abs/2002.01559v1
- Date: Tue, 4 Feb 2020 22:05:09 GMT
- Title: Whose Side are Ethics Codes On? Power, Responsibility and the Social
Good
- Authors: Anne L. Washington, Rachel S. Kuo
- Abstract summary: We argue that ethics codes that elevate consumers may simultaneously subordinate the needs of vulnerable populations.
We introduce the concept of digital differential vulnerability to explain disproportionate exposures to harm within data technology.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The moral authority of ethics codes stems from an assumption that they serve
a unified society, yet this ignores the political aspects of any shared
resource. The sociologist Howard S. Becker challenged researchers to clarify
their power and responsibility in his classic essay, "Whose Side Are We On?"
Building on Becker's hierarchy of credibility, we report on a critical
discourse analysis of data ethics codes and emerging conceptualizations of
beneficence, or the "social good", of data technology. The analysis revealed
that ethics codes from corporations and professional associations conflated
consumers with society and were largely silent on agency. Interviews with
community organizers about social change in the digital era supplement the
analysis, surfacing the limits of technical solutions to concerns of
marginalized communities. Given evidence that highlights the gulf between the
documents and lived experiences, we argue that ethics codes that elevate
consumers may simultaneously subordinate the needs of vulnerable populations.
Understanding contested digital resources is central to the emerging field of
public interest technology. We introduce the concept of digital differential
vulnerability to explain disproportionate exposures to harm within data
technology and suggest recommendations for future ethics codes.
Related papers
- Ethical Challenges in Computer Vision: Ensuring Privacy and Mitigating Bias in Publicly Available Datasets [0.0]
This paper aims to shed light on the ethical problems of creating and deploying computer vision technology.
Computer vision has become a vital tool in many industries, including medical care, security systems, and trade.
arXiv Detail & Related papers (2024-08-31T00:59:29Z)
- Ethical-Lens: Curbing Malicious Usages of Open-Source Text-to-Image Models [51.69735366140249]
We introduce Ethical-Lens, a framework designed to facilitate the value-aligned usage of text-to-image tools.
Ethical-Lens ensures value alignment in text-to-image models across toxicity and bias dimensions.
Our experiments reveal that Ethical-Lens enhances alignment capabilities to levels comparable with or superior to commercial models.
arXiv Detail & Related papers (2024-04-18T11:38:25Z)
- Eagle: Ethical Dataset Given from Real Interactions [74.7319697510621]
We create datasets extracted from real interactions between ChatGPT and users that exhibit social biases, toxicity, and immoral problems.
Our experiments show that Eagle captures complementary aspects, not covered by existing datasets proposed for evaluation and mitigation of such ethical challenges.
arXiv Detail & Related papers (2024-02-22T03:46:02Z)
- A Critical Examination of the Ethics of AI-Mediated Peer Review [0.0]
Recent advancements in artificial intelligence (AI) systems offer promise and peril for scholarly peer review.
Human peer review systems are also fraught with related problems, such as biases, abuses, and a lack of transparency.
The legitimacy of AI-driven peer review hinges on its alignment with the scientific ethos.
arXiv Detail & Related papers (2023-09-02T18:14:10Z)
- Rethinking Machine Ethics -- Can LLMs Perform Moral Reasoning through the Lens of Moral Theories? [78.3738172874685]
Making moral judgments is an essential step toward developing ethical AI systems.
Prevalent approaches are mostly implemented in a bottom-up manner, using a large set of annotated data to train models based on crowd-sourced opinions about morality.
This work proposes a flexible top-down framework to steer (Large) Language Models (LMs) to perform moral reasoning with well-established moral theories from interdisciplinary research.
arXiv Detail & Related papers (2023-08-29T15:57:32Z)
- Navigating Surveillance Capitalism: A Critical Analysis through philosophical perspectives in Computer Ethics [0.0]
Surveillance capitalism is the practice of collecting and analyzing massive amounts of user data.
Tech companies like Google and Facebook use users' personal information to deliver personalized content and advertisements.
Another example of surveillance capitalism is the use of military technology to collect and analyze data for national security purposes.
arXiv Detail & Related papers (2023-05-05T18:37:56Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Technology Ethics in Action: Critical and Interdisciplinary Perspectives [0.0]
In response to concerns about the social harms associated with digital technologies, many individuals and institutions have articulated the need for a greater emphasis on ethics in digital technology.
This interdisciplinary issue takes up these questions, interrogating the relationships among ethics, technology, and society in action.
arXiv Detail & Related papers (2022-02-03T00:41:53Z)
- Scruples: A Corpus of Community Ethical Judgments on 32,000 Real-Life Anecdotes [72.64975113835018]
Motivated by descriptive ethics, we investigate a novel, data-driven approach to machine ethics.
We introduce Scruples, the first large-scale dataset with 625,000 ethical judgments over 32,000 real-life anecdotes.
Our dataset presents a major challenge to state-of-the-art neural language models, leaving significant room for improvement.
arXiv Detail & Related papers (2020-08-20T17:34:15Z)
- Aligning AI With Shared Human Values [85.2824609130584]
We introduce the ETHICS dataset, a new benchmark that spans concepts in justice, well-being, duties, virtues, and commonsense morality.
We find that current language models have a promising but incomplete ability to predict basic human ethical judgements.
Our work shows that progress can be made on machine ethics today, and it provides a steppingstone toward AI that is aligned with human values.
arXiv Detail & Related papers (2020-08-05T17:59:16Z)
- Ethics in the digital era [0.0]
Ethics is an ancient concern for humankind; since the origin of civilizations, ethics has been tied to the most pressing human concerns and has shaped cultures.
The ongoing digital revolution enabled by Artificial Intelligence and data is raising wicked ethical problems in the social application of these technologies.
arXiv Detail & Related papers (2020-03-14T01:32:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.