Technologies for Trustworthy Machine Learning: A Survey in a
Socio-Technical Context
- URL: http://arxiv.org/abs/2007.08911v3
- Date: Thu, 20 Jan 2022 13:42:29 GMT
- Title: Technologies for Trustworthy Machine Learning: A Survey in a
Socio-Technical Context
- Authors: Ehsan Toreini, Mhairi Aitken, Kovila P. L. Coopamootoo, Karen Elliott,
Vladimiro Gonzalez Zelaya, Paolo Missier, Magdalene Ng, Aad van Moorsel
- Abstract summary: We argue that four categories of system properties are instrumental in achieving the policy objectives, namely fairness, explainability, auditability and safety & security (FEAS).
We discuss how these properties need to be considered across all stages of the machine learning life cycle, from data collection through run-time model inference.
We conclude with an identification of open research problems, with a particular focus on the connection between trustworthy machine learning technologies and their implications for individuals and society.
- Score: 4.866589122417036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concerns about the societal impact of AI-based services and systems have
encouraged governments and other organisations around the world to propose AI
policy frameworks to address fairness, accountability, transparency and related
topics. To achieve the objectives of these frameworks, the data and software
engineers who build machine-learning systems require knowledge about a variety
of relevant supporting tools and techniques. In this paper we provide an
overview of technologies that support building trustworthy machine learning
systems, i.e., systems whose properties justify that people place trust in
them. We argue that four categories of system properties are instrumental in
achieving the policy objectives, namely fairness, explainability, auditability
and safety & security (FEAS). We discuss how these properties need to be
considered across all stages of the machine learning life cycle, from data
collection through run-time model inference. As a consequence, we survey in
this paper the main technologies with respect to all four of the FEAS
properties, for data-centric as well as model-centric stages of the machine
learning system life cycle. We conclude with an identification of open research
problems, with a particular focus on the connection between trustworthy machine
learning technologies and their implications for individuals and society.
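As a rough illustration of this framing, the sketch below shows one way to lay out the FEAS properties against data-centric and model-centric life-cycle stages in code. The stage names and example techniques are illustrative assumptions made for the sketch, not a catalogue taken from the paper.

```python
# Minimal sketch: FEAS properties mapped to machine learning life-cycle stages.
# Stage names and example techniques are illustrative assumptions, not an
# exhaustive list from the survey.

FEAS = ("fairness", "explainability", "auditability", "safety_security")

LIFE_CYCLE = {
    # data-centric stages
    "data_collection":  {"fairness": "representative sampling",
                         "auditability": "data provenance records"},
    "data_preparation": {"fairness": "bias-aware preprocessing",
                         "safety_security": "input sanitisation"},
    # model-centric stages
    "model_training":   {"fairness": "fairness-constrained optimisation",
                         "explainability": "interpretable model classes"},
    "model_inference":  {"explainability": "post-hoc explanations",
                         "auditability": "logged predictions",
                         "safety_security": "adversarial-input detection"},
}

def techniques_for(prop: str) -> dict:
    """Per life-cycle stage, return the example technique noted for one FEAS property."""
    assert prop in FEAS
    return {stage: notes[prop] for stage, notes in LIFE_CYCLE.items() if prop in notes}

if __name__ == "__main__":
    for stage, technique in techniques_for("fairness").items():
        print(f"{stage}: {technique}")
```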
Related papers
- Trustworthy Representation Learning Across Domains [22.54626834599221]
We introduce the details of the proposed trustworthy framework for representation learning across domains.
We provide basic notions and comprehensively summarize existing methods for the trustworthy framework in terms of four concepts.
arXiv Detail & Related papers (2023-08-23T08:38:54Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the associated uncertainty.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Identifying Roles, Requirements and Responsibilities in Trustworthy AI Systems [2.28438857884398]
We consider an AI system from the domain practitioner's perspective and identify key roles that are involved in system deployment.
We consider the differing requirements and responsibilities of each role, and identify a tension between transparency and privacy that needs to be addressed.
arXiv Detail & Related papers (2021-06-15T16:05:10Z)
- From Distributed Machine Learning to Federated Learning: A Survey [49.7569746460225]
Federated learning emerges as an efficient approach to exploit distributed data and computing resources.
We propose a functional architecture of federated learning systems and a taxonomy of related techniques.
We present the distributed training, data communication, and security of FL systems.
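As a brief aside on what the distributed training in such systems looks like, the following is a minimal federated-averaging (FedAvg-style) sketch. The synthetic client data, the linear model, and the sample-count weighting are assumptions made for illustration and are not taken from the cited survey.

```python
# Minimal federated-averaging (FedAvg-style) sketch with NumPy.
# Each client fits a local linear model on private synthetic data; the server
# aggregates parameters weighted by client sample counts, so raw data never leaves a client.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # ground-truth weights for the synthetic task

def make_client(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

def local_update(w, X, y, lr=0.1, epochs=20):
    """A few steps of local gradient descent on mean squared error."""
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

clients = [make_client(n) for n in (50, 200, 80)]
w_global = np.zeros(2)

for _ in range(10):                     # communication rounds
    updates, sizes = [], []
    for X, y in clients:                # local training on private data
        updates.append(local_update(w_global, X, y))
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    # server step: weighted average of client parameters
    w_global = np.average(updates, axis=0, weights=sizes / sizes.sum())

print("aggregated weights:", w_global)  # should approach [2.0, -1.0]
```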
arXiv Detail & Related papers (2021-04-29T14:15:11Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
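To make the brittleness point concrete, here is a minimal sketch of a fast-gradient-sign-style perturbation against a toy logistic-regression model; the synthetic data, model, and perturbation budget are illustrative assumptions, not material from the cited tutorial.

```python
# Minimal sketch: small input perturbations shift a toy classifier's prediction
# toward the wrong class (a fast-gradient-sign-style attack on logistic regression).
# Data, model, and epsilon are synthetic illustrations.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

# toy binary classification data: two noisy clusters
shift = np.where(rng.random(200) < 0.5, 2.0, -2.0)[:, None]
X = rng.normal(size=(200, 2)) + shift
y = (X.sum(axis=1) > 0).astype(float)

# train logistic regression by gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

def predict(x):
    return sigmoid(x @ w + b)

# fast-gradient-sign step: x_adv = x + eps * sign(d loss / d x);
# for logistic regression with cross-entropy loss, d loss / d x = (p - y) * w
x, label = X[0], y[0]
eps = 0.5
grad_x = (predict(x) - label) * w
x_adv = x + eps * np.sign(grad_x)

print("clean prediction:    ", predict(x), "true label:", label)
print("perturbed prediction:", predict(x_adv), "(moves toward the wrong class)")
```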
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Enterprise AI Canvas -- Integrating Artificial Intelligence into Business [0.0]
The Enterprise AI Canvas is designed to bring data scientists and business experts together to discuss and define all relevant aspects.
It consists of two parts where part one focuses on the business view and organizational aspects, whereas part two focuses on the underlying machine learning model and the data it uses.
arXiv Detail & Related papers (2020-09-18T07:30:56Z)
- Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)
- Towards CRISP-ML(Q): A Machine Learning Process Model with Quality Assurance Methodology [53.063411515511056]
We propose a process model for the development of machine learning applications.
The first phase combines business and data understanding as data availability oftentimes affects the feasibility of the project.
The sixth phase covers state-of-the-art approaches for the monitoring and maintenance of machine learning applications.
arXiv Detail & Related papers (2020-03-11T08:25:49Z)
- Knowledge Federation: A Unified and Hierarchical Privacy-Preserving AI Framework [25.950286526030645]
We propose a comprehensive framework (called Knowledge Federation - KF) to address challenges by enabling AI while preserving data privacy and ownership.
KF consists of four levels of federation: (1) information level, low-level statistics and computation of data, meeting the requirements of simple queries, searching and simplistic operators; (2) model level, supporting training, learning, and inference; (3) cognition level, enabling abstract feature representation at various levels of abstractions and contexts; (4) knowledge level, fusing knowledge discovery, representation, and reasoning.
We have developed a reference implementation of KF, called iBond Platform, to offer a production-quality
arXiv Detail & Related papers (2020-02-05T05:23:35Z)