Nose to Glass: Looking In to Get Beyond
- URL: http://arxiv.org/abs/2011.13153v2
- Date: Tue, 1 Dec 2020 13:24:07 GMT
- Title: Nose to Glass: Looking In to Get Beyond
- Authors: Josephine Seah
- Abstract summary: An increasing amount of research has been conducted under the banner of enhancing responsible artificial intelligence.
Research aims to address, alleviate, and eventually mitigate the harms brought on by the rollout of algorithmic systems.
However, implementation of such tools remains low.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Brought into the public discourse through investigative work by journalists
and scholars, awareness of algorithmic harms is at an all-time high. An
increasing amount of research has been conducted under the banner of enhancing
responsible artificial intelligence (AI), with the goal of addressing,
alleviating, and eventually mitigating the harms brought on by the rollout of
algorithmic systems. Nonetheless, implementation of such tools remains low.
Given this gap, this paper offers a modest proposal: that the field,
particularly researchers concerned with responsible research and innovation,
may stand to gain from supporting and prioritising more ethnographic work. This
embedded work can flesh out implementation frictions and reveal organisational
and institutional norms that existing work on responsible artificial
intelligence (AI) has not yet been able to offer. In turn, this can contribute to
more insights about the anticipation of risks and mitigation of harm. This
paper reviews similar empirical work typically found elsewhere, commonly in
science and technology studies and safety science research, and lays out
challenges of this form of inquiry.
Related papers
- The Narrow Depth and Breadth of Corporate Responsible AI Research [3.364518262921329]
We find that the majority of AI firms show limited or no engagement in this critical subfield of AI.
Leading AI firms exhibit significantly lower output in responsible AI research compared to their conventional AI research.
Our results highlight the urgent need for industry to publicly engage in responsible AI research.
arXiv Detail & Related papers (2024-05-20T17:26:43Z)
- A Disruptive Research Playbook for Studying Disruptive Innovations [11.619658523864686]
We propose a research playbook with the goal of providing a guide to formulate compelling and socially relevant research questions.
We show it can be used to question the impact of two current disruptive technologies: AI and AR/VR.
arXiv Detail & Related papers (2024-02-20T19:13:36Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- Towards Possibilities & Impossibilities of AI-generated Text Detection: A Survey [97.33926242130732]
Large Language Models (LLMs) have revolutionized the domain of natural language processing (NLP) with remarkable capabilities of generating human-like text responses.
Despite these advancements, several works in the existing literature have raised serious concerns about the potential misuse of LLMs.
To address these concerns, a consensus among the research community is to develop algorithmic solutions to detect AI-generated text.
arXiv Detail & Related papers (2023-10-23T18:11:32Z)
- Human-Centered Responsible Artificial Intelligence: Current & Future Trends [76.94037394832931]
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence.
All of this work is aimed at developing AI that benefits humanity while being grounded in human rights and ethics, and reducing the potential harms of AI.
In this special interest group, we aim to bring together researchers from academia and industry interested in these topics to map current and future research trends.
arXiv Detail & Related papers (2023-02-16T08:59:42Z)
- AI Security for Geoscience and Remote Sensing: Challenges and Future Trends [16.001238774325333]
This paper reviews the current development of AI security in the geoscience and remote sensing field.
It covers the following five important aspects: adversarial attack, backdoor attack, federated learning, uncertainty, and explainability.
To the best of the authors' knowledge, this paper is the first attempt to provide a systematic review of AI security-related research in the geoscience and RS community.
arXiv Detail & Related papers (2022-12-19T10:54:51Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Characterising Research Areas in the field of AI [68.8204255655161]
We identified the main conceptual themes by performing clustering analysis on the co-occurrence network of topics.
The results highlight the growing academic interest in research themes like deep learning, machine learning, and internet of things.
arXiv Detail & Related papers (2022-05-26T16:30:30Z)
- Artificial Intelligence for IT Operations (AIOPS) Workshop White Paper [50.25428141435537]
Artificial Intelligence for IT Operations (AIOps) is an emerging interdisciplinary field arising in the intersection between machine learning, big data, streaming analytics, and the management of IT operations.
The main aim of the AIOps workshop is to bring together researchers from academia and industry to present their experiences, results, and work in progress in this field.
arXiv Detail & Related papers (2021-01-15T10:43:10Z)
- A narrowing of AI research? [0.0]
We study the evolution of the thematic diversity of AI research in academia and the private sector.
We measure the influence of private companies in AI research through the citations they receive and their collaborations with other institutions.
arXiv Detail & Related papers (2020-09-22T08:23:56Z)
- The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse? [0.0]
There is growing concern over the potential misuse of artificial intelligence (AI) research.
Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse.
This paper addresses the balance between these two effects.
arXiv Detail & Related papers (2019-12-27T10:20:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.