A framework for fostering transparency in shared artificial intelligence
models by increasing visibility of contributions
- URL: http://arxiv.org/abs/2103.03610v1
- Date: Fri, 5 Mar 2021 11:28:50 GMT
- Title: A framework for fostering transparency in shared artificial intelligence
models by increasing visibility of contributions
- Authors: Iain Barclay, Harrison Taylor, Alun Preece, Ian Taylor, Dinesh Verma,
Geeth de Mel
- Abstract summary: This paper presents a novel method for deriving a quantifiable metric capable of ranking the overall transparency of the process pipelines used to generate AI systems.
The methodology for calculating the metric, and the type of criteria that could be used to make judgements on the visibility of contributions to systems are evaluated.
- Score: 0.6850683267295249
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Increased adoption of artificial intelligence (AI) systems into scientific
workflows will result in an increasing technical debt as the distance between
the data scientists and engineers who develop AI system components and
scientists, researchers and other users grows. This could quickly become
problematic, particularly where guidance or regulations change and
once-acceptable best practice becomes outdated, or where data sources are later
discredited as biased or inaccurate. This paper presents a novel method for
deriving a quantifiable metric capable of ranking the overall transparency of
the process pipelines used to generate AI systems, such that users, auditors
and other stakeholders can gain confidence that they will be able to validate
and trust the data sources and contributors in the AI systems that they rely
on. The methodology for calculating the metric, and the type of criteria that
could be used to make judgements on the visibility of contributions to systems,
are evaluated through models published at ModelHub and PyTorch Hub, popular
archives for sharing science resources, and are found to be helpful in driving
consideration of the contributions made to generating AI systems and of
approaches towards effective documentation and improving transparency in
machine learning assets shared within scientific communities.
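The abstract describes the metric only at a high level; the idea of ranking a pipeline's overall transparency by the visibility of its contributions can be sketched as a weighted visibility score. The criteria names and weights below are hypothetical illustrations, not the authors' actual scheme.

```python
# Hypothetical sketch: score each pipeline contribution against
# visibility criteria, then aggregate into one transparency metric
# in [0, 1]. Criteria and weights are illustrative assumptions,
# not the paper's published methodology.

CRITERIA_WEIGHTS = {
    "source_identified": 0.4,       # is the data/model source named?
    "license_stated": 0.3,          # is a usage license published?
    "provenance_documented": 0.3,   # is the generation process described?
}

def contribution_score(visibility: dict) -> float:
    """Weighted visibility score for a single contribution (0..1)."""
    return sum(w for c, w in CRITERIA_WEIGHTS.items() if visibility.get(c))

def pipeline_transparency(contributions: list) -> float:
    """Mean contribution score across the whole process pipeline."""
    if not contributions:
        return 0.0
    return sum(contribution_score(v) for v in contributions) / len(contributions)

# Example: two contributions with partial visibility.
pipeline = [
    {"source_identified": True, "license_stated": True, "provenance_documented": False},
    {"source_identified": True, "license_stated": False, "provenance_documented": False},
]
print(round(pipeline_transparency(pipeline), 2))  # mean of 0.7 and 0.4 -> 0.55
```

A score like this lets auditors compare pipelines at a glance, with low scores flagging contributions whose sources or documentation cannot be validated.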
Related papers
- Private Knowledge Sharing in Distributed Learning: A Survey [50.51431815732716]
The rise of Artificial Intelligence has revolutionized numerous industries and transformed the way society operates.
It is crucial to utilize information in learning processes that are either distributed or owned by different entities.
Modern data-driven services have been developed to integrate distributed knowledge entities into their outcomes.
arXiv Detail & Related papers (2024-02-08T07:18:23Z)
- Towards a Responsible AI Metrics Catalogue: A Collection of Metrics for AI Accountability [28.67753149592534]
This study bridges the accountability gap by introducing our effort towards a comprehensive metrics catalogue.
Our catalogue delineates process metrics that underpin procedural integrity, resource metrics that provide necessary tools and frameworks, and product metrics that reflect the outputs of AI systems.
arXiv Detail & Related papers (2023-11-22T04:43:16Z)
- The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence [67.70415658080121]
Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole.
AI has contributed less to fundamental science, in part because large, high-quality data sets for scientific practice and model discovery are more difficult to access.
Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
arXiv Detail & Related papers (2023-07-09T21:16:56Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- FAIR principles for AI models, with a practical application for accelerated high energy diffraction microscopy [1.9270896986812693]
We showcase how to create and share FAIR data and AI models within a unified computational framework.
We describe how this domain-agnostic computational framework may be harnessed to enable autonomous AI-driven discovery.
arXiv Detail & Related papers (2022-07-01T18:11:12Z)
- Providing Assurance and Scrutability on Shared Data and Machine Learning Models with Verifiable Credentials [0.0]
Practitioners rely on AI developers to have used relevant, trustworthy data.
Scientists can issue signed credentials attesting to qualities of their data resources.
The BOM provides a traceable record of the supply chain for an AI system.
arXiv Detail & Related papers (2021-05-13T15:58:05Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, inability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.