Towards Equitable Agile Research and Development of AI and Robotics
- URL: http://arxiv.org/abs/2402.08242v1
- Date: Tue, 13 Feb 2024 06:13:17 GMT
- Title: Towards Equitable Agile Research and Development of AI and Robotics
- Authors: Andrew Hundt, Julia Schuller, Severin Kacianka
- Abstract summary: We propose a framework for adapting widely practiced Research and Development (R&D) project management methodologies to build organizational equity capabilities.
We describe how project teams can organize and operationalize the most promising practices, skill sets, organizational cultures, and methods to detect and address rights-based fairness, equity, accountability, and ethical problems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning (ML) and 'Artificial Intelligence' ('AI') methods tend to
replicate and amplify existing biases and prejudices, as do Robots with AI. For
example, robots with facial recognition have failed to identify Black Women as
human, while others have categorized people, such as Black Men, as criminals
based on appearance alone. A 'culture of modularity' means harms are perceived
as 'out of scope', or someone else's responsibility, throughout employment
positions in the 'AI supply chain'. Incidents are routine enough
(incidentdatabase.ai lists over 2000 examples) to indicate that few
organizations are capable of completely respecting peoples' rights; meeting
claimed equity, diversity, and inclusion (EDI or DEI) goals; or recognizing and
then addressing such failures in their organizations and artifacts. We propose
a framework for adapting widely practiced Research and Development (R&D)
project management methodologies to build organizational equity capabilities
and better integrate known evidence-based best practices. We describe how
project teams can organize and operationalize the most promising practices,
skill sets, organizational cultures, and methods to detect and address
rights-based fairness, equity, accountability, and ethical problems as early as
possible when they are often less harmful and easier to mitigate; then monitor
for unforeseen incidents to adaptively and constructively address them. Our
primary example adapts an Agile development process based on Scrum, one of the
most widely adopted approaches to organizing R&D teams. We also discuss
limitations of our proposed framework and future research directions.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- The Switch, the Ladder, and the Matrix: Models for Classifying AI Systems [0.0]
There still exists a gap between principles and practices in AI ethics.
One major obstacle organisations face when attempting to operationalise AI Ethics is the lack of a well-defined material scope.
arXiv Detail & Related papers (2024-07-07T12:16:01Z)
- LLM-Driven Robots Risk Enacting Discrimination, Violence, and Unlawful Actions [3.1247504290622214]
Research has raised concerns about the potential for Large Language Models to produce discriminatory outcomes and unsafe behaviors in real-world robot experiments and applications.
We conduct an HRI-based evaluation of discrimination and safety criteria on several highly-rated LLMs.
Our results underscore the urgent need for systematic, routine, and comprehensive risk assessments and assurances to improve outcomes.
arXiv Detail & Related papers (2024-06-13T05:31:49Z)
- General Purpose Artificial Intelligence Systems (GPAIS): Properties, Definition, Taxonomy, Societal Implications and Responsible Governance [16.030931070783637]
The term General-Purpose Artificial Intelligence Systems (GPAIS) has been defined to refer to this class of AI systems.
To date, an Artificial General Intelligence powerful enough to perform any intellectual task as a human would, or even surpass human performance, has remained an aspiration and a fiction, and is considered a risk to our society.
This work discusses existing definitions for GPAIS and proposes a new definition that allows for a gradual differentiation among types of GPAIS according to their properties and limitations.
arXiv Detail & Related papers (2023-07-26T16:35:48Z)
- Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety [2.3572498744567127]
We argue that alignment to human intent is insufficient for safe AI systems.
We argue that preservation of long-term agency of humans may be a more robust standard.
arXiv Detail & Related papers (2023-05-30T17:14:01Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can bridge these stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Towards Fairness Certification in Artificial Intelligence [31.920661197618195]
We propose a first joint effort to define the operational steps needed for AI fairness certification.
We overview the criteria that an AI system should meet before entering official service, and the conformity assessment procedures useful for monitoring whether its decisions remain fair in operation.
arXiv Detail & Related papers (2021-06-04T14:12:12Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.